[2025-08-14 02:35:09,604][04513] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-08-14 02:35:09,606][04513] Rollout worker 0 uses device cpu
[2025-08-14 02:35:09,607][04513] Rollout worker 1 uses device cpu
[2025-08-14 02:35:09,608][04513] Rollout worker 2 uses device cpu
[2025-08-14 02:35:09,609][04513] Rollout worker 3 uses device cpu
[2025-08-14 02:35:09,610][04513] Rollout worker 4 uses device cpu
[2025-08-14 02:35:09,611][04513] Rollout worker 5 uses device cpu
[2025-08-14 02:35:09,612][04513] Rollout worker 6 uses device cpu
[2025-08-14 02:35:09,613][04513] Rollout worker 7 uses device cpu
[2025-08-14 02:35:09,790][04513] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-14 02:35:09,791][04513] InferenceWorker_p0-w0: min num requests: 2
[2025-08-14 02:35:09,839][04513] Starting all processes...
[2025-08-14 02:35:09,844][04513] Starting process learner_proc0
[2025-08-14 02:35:09,925][04513] Starting all processes...
[2025-08-14 02:35:09,941][04513] Starting process inference_proc0-0
[2025-08-14 02:35:09,942][04513] Starting process rollout_proc0
[2025-08-14 02:35:09,942][04513] Starting process rollout_proc1
[2025-08-14 02:35:09,943][04513] Starting process rollout_proc2
[2025-08-14 02:35:09,943][04513] Starting process rollout_proc3
[2025-08-14 02:35:09,943][04513] Starting process rollout_proc4
[2025-08-14 02:35:09,943][04513] Starting process rollout_proc5
[2025-08-14 02:35:09,943][04513] Starting process rollout_proc6
[2025-08-14 02:35:09,943][04513] Starting process rollout_proc7
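
The launcher above spawns one learner process, one GPU inference worker, and eight CPU-bound rollout workers. A minimal sketch of that process layout using plain multiprocessing (Sample Factory's actual runner additionally wires these processes together with shared-memory buffers and an event loop):

```python
# Illustrative only: the process names mirror the log above, the bodies are stubs.
import multiprocessing as mp

def learner_proc(idx: int):
    ...  # optimize the policy on batches assembled from rollout experience

def inference_proc(idx: int):
    ...  # run batched forward passes on behalf of all rollout workers

def rollout_proc(idx: int):
    ...  # step the environment and submit observations for inference

if __name__ == "__main__":
    procs = [mp.Process(target=learner_proc, args=(0,), name="learner_proc0"),
             mp.Process(target=inference_proc, args=(0,), name="inference_proc0-0")]
    procs += [mp.Process(target=rollout_proc, args=(i,), name=f"rollout_proc{i}")
              for i in range(8)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```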
[2025-08-14 02:35:27,025][04840] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-14 02:35:27,032][04840] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-08-14 02:35:27,140][04840] Num visible devices: 1
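
The learner pins itself to a single GPU by setting CUDA_VISIBLE_DEVICES before CUDA initializes, which remaps device indices so the process afterwards sees exactly one visible device. A sketch of the same idea:

```python
# Must happen before torch initializes CUDA, hence the import order.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # -> 1, matching "Num visible devices: 1"
```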
[2025-08-14 02:35:27,157][04840] Starting seed is not provided
[2025-08-14 02:35:27,158][04840] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-14 02:35:27,158][04840] Initializing actor-critic model on device cuda:0
[2025-08-14 02:35:27,159][04840] RunningMeanStd input shape: (3, 72, 128)
[2025-08-14 02:35:27,165][04840] RunningMeanStd input shape: (1,)
[2025-08-14 02:35:27,239][04840] ConvEncoder: input_channels=3
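
The two RunningMeanStd modules above track running statistics for image observations of shape (3, 72, 128) and for scalar returns of shape (1,). A minimal sketch of such a normalizer using the standard parallel mean/variance update (the actual in-place TorchScript implementation differs in detail):

```python
import numpy as np

class RunningMeanStd:
    def __init__(self, shape, eps=1e-5):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = eps

    def update(self, batch):
        # Chan et al. parallel update: merge batch moments into running moments.
        b_mean = batch.mean(axis=0)
        b_var = batch.var(axis=0)
        b_count = batch.shape[0]
        delta = b_mean - self.mean
        tot = self.count + b_count
        self.mean += delta * b_count / tot
        m_a = self.var * self.count
        m_b = b_var * b_count
        self.var = (m_a + m_b + delta**2 * self.count * b_count / tot) / tot
        self.count = tot

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

obs_norm = RunningMeanStd((3, 72, 128))  # image observations
ret_norm = RunningMeanStd((1,))          # scalar returns
```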
[2025-08-14 02:35:27,452][04858] Worker 5 uses CPU cores [1]
[2025-08-14 02:35:27,637][04856] Worker 3 uses CPU cores [1]
[2025-08-14 02:35:27,718][04854] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-14 02:35:27,719][04854] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-08-14 02:35:27,790][04854] Num visible devices: 1
[2025-08-14 02:35:27,834][04853] Worker 0 uses CPU cores [0]
[2025-08-14 02:35:27,857][04861] Worker 7 uses CPU cores [1]
[2025-08-14 02:35:27,882][04855] Worker 1 uses CPU cores [1]
[2025-08-14 02:35:27,933][04860] Worker 6 uses CPU cores [0]
[2025-08-14 02:35:27,959][04859] Worker 2 uses CPU cores [0]
[2025-08-14 02:35:27,980][04857] Worker 4 uses CPU cores [0]
[2025-08-14 02:35:27,997][04840] Conv encoder output size: 512
[2025-08-14 02:35:27,997][04840] Policy head output size: 512
[2025-08-14 02:35:28,060][04840] Created Actor Critic model with architecture:
[2025-08-14 02:35:28,061][04840] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
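
The printout above hides the layer hyperparameters inside RecursiveScriptModule wrappers. Below is a hedged PyTorch reconstruction: the three-conv filter spec is Sample Factory's usual default for this encoder (an assumption, not read from this log), while the 512-wide encoder and policy-head outputs, the GRU(512, 512) core, the 1-unit critic head, and the 5-action policy head come straight from the log:

```python
import torch
from torch import nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions=5):
        super().__init__()
        # Assumed filter spec: (32,8,4) -> (64,4,2) -> (128,3,2), each with ELU.
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer the flattened conv output size
            n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).numel()
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, 512), nn.ELU())
        self.core = nn.GRU(512, 512)                   # ModelCoreRNN
        self.critic_linear = nn.Linear(512, 1)         # value head
        self.distribution_linear = nn.Linear(512, num_actions)  # action logits

    def forward(self, obs, rnn_state):
        x = self.conv_head(obs).flatten(1)
        x = self.mlp_layers(x)            # "Conv encoder output size: 512"
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)                  # "Policy head output size: 512"
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

model = ActorCriticSketch()
logits, value, h = model(torch.zeros(4, 3, 72, 128), torch.zeros(1, 4, 512))
```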
[2025-08-14 02:35:28,326][04840] Using optimizer <class 'torch.optim.adam.Adam'>
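
The learner then builds its optimizer. The log prints only the class; the hyperparameters below are Sample Factory's documented defaults and should be treated as assumptions:

```python
import torch
# `model` is the ActorCriticSketch from the previous block.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, eps=1e-6)
```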
[2025-08-14 02:35:29,781][04513] Heartbeat connected on Batcher_0
[2025-08-14 02:35:29,792][04513] Heartbeat connected on InferenceWorker_p0-w0
[2025-08-14 02:35:29,798][04513] Heartbeat connected on RolloutWorker_w0
[2025-08-14 02:35:29,804][04513] Heartbeat connected on RolloutWorker_w1
[2025-08-14 02:35:29,808][04513] Heartbeat connected on RolloutWorker_w2
[2025-08-14 02:35:29,828][04513] Heartbeat connected on RolloutWorker_w3
[2025-08-14 02:35:29,835][04513] Heartbeat connected on RolloutWorker_w4
[2025-08-14 02:35:29,841][04513] Heartbeat connected on RolloutWorker_w6
[2025-08-14 02:35:29,842][04513] Heartbeat connected on RolloutWorker_w7
[2025-08-14 02:35:29,846][04513] Heartbeat connected on RolloutWorker_w5
[2025-08-14 02:35:33,086][04840] No checkpoints found
[2025-08-14 02:35:33,086][04840] Did not load from checkpoint, starting from scratch!
[2025-08-14 02:35:33,087][04840] Initialized policy 0 weights for model version 0
[2025-08-14 02:35:33,090][04840] LearnerWorker_p0 finished initialization!
[2025-08-14 02:35:33,091][04840] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-14 02:35:33,103][04513] Heartbeat connected on LearnerWorker_p0
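
Learner startup ends with the load-or-scratch decision logged above: since the experiment directory contains no checkpoint files, the policy is freshly initialized as model version 0. A sketch of that logic (the helper below is hypothetical, not Sample Factory's actual loader):

```python
import glob, os, torch

def load_latest_checkpoint(model, ckpt_dir):
    # Mirrors the "No checkpoints found" path above: fresh start at version 0.
    ckpts = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    if not ckpts:
        print("No checkpoints found")
        print("Did not load from checkpoint, starting from scratch!")
        return 0
    state = torch.load(ckpts[-1], map_location="cpu")
    model.load_state_dict(state["model"])
    return state.get("policy_version", 0)

# e.g. load_latest_checkpoint(model,
#          "/content/train_dir/default_experiment/checkpoint_p0")
```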
[2025-08-14 02:35:33,236][04854] RunningMeanStd input shape: (3, 72, 128)
[2025-08-14 02:35:33,238][04854] RunningMeanStd input shape: (1,)
[2025-08-14 02:35:33,249][04854] ConvEncoder: input_channels=3
[2025-08-14 02:35:33,375][04854] Conv encoder output size: 512
[2025-08-14 02:35:33,376][04854] Policy head output size: 512
[2025-08-14 02:35:33,417][04513] Inference worker 0-0 is ready!
[2025-08-14 02:35:33,418][04513] All inference workers are ready! Signal rollout workers to start!
[2025-08-14 02:35:33,575][04513] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
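
Each "Fps is ..." status line reports frame throughput over sliding 10-, 60-, and 300-second windows plus the lifetime frame count; "Policy #0 lag" summarizes, roughly, how many policy versions old the collected samples were. Before any samples arrive the windows are empty, hence the nan values. A sketch of the windowed bookkeeping:

```python
import time
from collections import deque

history = deque()  # (timestamp, total_env_frames) pairs

def record(total_frames):
    history.append((time.time(), total_frames))
    while history and history[0][0] < time.time() - 300:
        history.popleft()  # nothing outside the widest (300 s) window is needed

def fps(window):
    now = time.time()
    pts = [(t, f) for t, f in history if t >= now - window]
    if len(pts) < 2:
        return float("nan")  # no data yet, as in the first status line above
    (t0, f0), (t1, f1) = pts[0], pts[-1]
    return (f1 - f0) / max(t1 - t0, 1e-9)
```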
[2025-08-14 02:35:33,694][04855] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-14 02:35:33,693][04860] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-14 02:35:33,722][04861] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-14 02:35:33,731][04856] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-14 02:35:33,734][04857] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-14 02:35:33,770][04859] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-14 02:35:33,772][04858] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-14 02:35:33,787][04853] Doom resolution: 160x120, resize resolution: (128, 72)
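
Each rollout worker renders ViZDoom at its native 160x120 and resizes frames to (128, 72), i.e. width 128 by height 72, which matches the (3, 72, 128) observation shape the encoder expects. A sketch of that preprocessing step, using OpenCV here (the actual wrapper may differ):

```python
import cv2
import numpy as np

frame = np.zeros((120, 160, 3), dtype=np.uint8)  # H x W x C from ViZDoom
resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)
obs = resized.transpose(2, 0, 1)                 # C x H x W -> (3, 72, 128)
```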
[2025-08-14 02:35:35,590][04855] Decorrelating experience for 0 frames...
[2025-08-14 02:35:35,590][04859] Decorrelating experience for 0 frames...
[2025-08-14 02:35:35,592][04856] Decorrelating experience for 0 frames...
[2025-08-14 02:35:35,592][04860] Decorrelating experience for 0 frames...
[2025-08-14 02:35:36,042][04856] Decorrelating experience for 32 frames...
[2025-08-14 02:35:36,148][04853] Decorrelating experience for 0 frames...
[2025-08-14 02:35:37,072][04853] Decorrelating experience for 32 frames...
[2025-08-14 02:35:37,121][04860] Decorrelating experience for 32 frames...
[2025-08-14 02:35:37,171][04855] Decorrelating experience for 32 frames...
[2025-08-14 02:35:38,497][04856] Decorrelating experience for 64 frames...
[2025-08-14 02:35:38,575][04513] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-08-14 02:35:38,642][04859] Decorrelating experience for 32 frames...
[2025-08-14 02:35:38,685][04855] Decorrelating experience for 64 frames...
[2025-08-14 02:35:38,925][04853] Decorrelating experience for 64 frames...
[2025-08-14 02:35:38,954][04860] Decorrelating experience for 64 frames...
[2025-08-14 02:35:39,447][04855] Decorrelating experience for 96 frames...
[2025-08-14 02:35:39,831][04856] Decorrelating experience for 96 frames...
[2025-08-14 02:35:40,169][04859] Decorrelating experience for 64 frames...
[2025-08-14 02:35:40,197][04860] Decorrelating experience for 96 frames...
[2025-08-14 02:35:40,550][04853] Decorrelating experience for 96 frames...
[2025-08-14 02:35:40,802][04859] Decorrelating experience for 96 frames...
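
Before training starts, each worker decorrelates its experience by stepping its environments ahead in 32-frame stages (the 0/32/64/96 progression above), so the eight workers do not hit episode boundaries in lockstep. A sketch under the assumption that random actions drive the warm-up; `env` is any Gym-style environment:

```python
def decorrelate_experience(env, num_stages=4, frames_per_stage=32):
    # Mirrors the "Decorrelating experience for 0/32/64/96 frames..." messages.
    env.reset()
    for stage in range(num_stages):
        print(f"Decorrelating experience for {stage * frames_per_stage} frames...")
        for _ in range(frames_per_stage):
            result = env.step(env.action_space.sample())
            done = result[2]  # 'done'/'terminated' flag in Gym-style APIs
            if done:
                env.reset()
```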
[2025-08-14 02:35:43,575][04513] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 142.0. Samples: 1420. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-08-14 02:35:43,577][04513] Avg episode reward: [(0, '2.988')]
[2025-08-14 02:35:44,032][04840] Signal inference workers to stop experience collection...
[2025-08-14 02:35:44,044][04854] InferenceWorker_p0-w0: stopping experience collection
[2025-08-14 02:35:45,433][04840] Signal inference workers to resume experience collection...
[2025-08-14 02:35:45,436][04854] InferenceWorker_p0-w0: resuming experience collection
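
Once the first batch of samples is in, the learner briefly pauses experience collection, trains on it, and resumes, as the stop/resume messages above show. A multiprocessing.Event is one simple way to express that handshake (the training call below is a stub, not Sample Factory's actual signalling):

```python
import multiprocessing as mp

collect = mp.Event()
collect.set()  # collection runs by default

def train_on_buffered_experience():
    ...  # stub: one learner update on the buffered samples

def learner_first_batch():
    collect.clear()                 # "Signal inference workers to stop..."
    train_on_buffered_experience()
    collect.set()                   # "Signal inference workers to resume..."

def inference_loop_iteration():
    collect.wait()                  # workers block here while collection is paused
```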
[2025-08-14 02:35:48,575][04513] Fps is (10 sec: 2048.0, 60 sec: 1365.3, 300 sec: 1365.3). Total num frames: 20480. Throughput: 0: 354.7. Samples: 5320. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:35:48,576][04513] Avg episode reward: [(0, '3.872')]
[2025-08-14 02:35:53,575][04513] Fps is (10 sec: 3276.9, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 32768. Throughput: 0: 364.3. Samples: 7286. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:35:53,577][04513] Avg episode reward: [(0, '4.059')]
[2025-08-14 02:35:55,094][04854] Updated weights for policy 0, policy_version 10 (0.0096)
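
From here on, "Updated weights" lines appear every ten policy versions: the learner publishes a new state_dict with an incremented policy_version and the inference worker swaps it in; the number in parentheses is plausibly the time the swap took, in seconds. A sketch of such a versioned handoff (the shared_store dict stands in for Sample Factory's shared-memory machinery):

```python
def publish_weights(model, shared_store, policy_version):
    # Learner side: publish CPU copies of the weights plus a version counter.
    shared_store["weights"] = {k: v.detach().cpu()
                               for k, v in model.state_dict().items()}
    shared_store["policy_version"] = policy_version

def maybe_refresh_weights(model, shared_store, current_version):
    # Inference side: swap in the newest weights between batches.
    if shared_store["policy_version"] > current_version:
        model.load_state_dict(shared_store["weights"])
        return shared_store["policy_version"]
    return current_version
```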
[2025-08-14 02:35:58,575][04513] Fps is (10 sec: 2867.2, 60 sec: 1966.1, 300 sec: 1966.1). Total num frames: 49152. Throughput: 0: 498.6. Samples: 12466. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-08-14 02:35:58,578][04513] Avg episode reward: [(0, '4.394')]
[2025-08-14 02:36:03,575][04513] Fps is (10 sec: 3276.6, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 65536. Throughput: 0: 587.1. Samples: 17612. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:36:03,579][04513] Avg episode reward: [(0, '4.459')]
[2025-08-14 02:36:08,012][04854] Updated weights for policy 0, policy_version 20 (0.0021)
[2025-08-14 02:36:08,575][04513] Fps is (10 sec: 3276.8, 60 sec: 2340.6, 300 sec: 2340.6). Total num frames: 81920. Throughput: 0: 555.1. Samples: 19430. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:36:08,576][04513] Avg episode reward: [(0, '4.350')]
[2025-08-14 02:36:13,575][04513] Fps is (10 sec: 3686.6, 60 sec: 2560.0, 300 sec: 2560.0). Total num frames: 102400. Throughput: 0: 624.6. Samples: 24982. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:36:13,579][04513] Avg episode reward: [(0, '4.476')]
[2025-08-14 02:36:13,585][04840] Saving new best policy, reward=4.476!
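
"Saving new best policy" fires whenever the average episode reward exceeds the best seen so far; only improvements trigger a write. A sketch of that bookkeeping (the file name is an assumption):

```python
import torch

best_reward = float("-inf")

def maybe_save_best(model, avg_reward, ckpt_dir):
    global best_reward
    if avg_reward > best_reward:
        best_reward = avg_reward
        print(f"Saving new best policy, reward={avg_reward:.3f}!")
        torch.save({"model": model.state_dict()}, f"{ckpt_dir}/best_policy.pth")
```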
[2025-08-14 02:36:18,575][04513] Fps is (10 sec: 3686.4, 60 sec: 2639.6, 300 sec: 2639.6). Total num frames: 118784. Throughput: 0: 672.1. Samples: 30244. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:36:18,576][04513] Avg episode reward: [(0, '4.430')]
[2025-08-14 02:36:19,786][04854] Updated weights for policy 0, policy_version 30 (0.0020)
[2025-08-14 02:36:23,575][04513] Fps is (10 sec: 2867.2, 60 sec: 2621.4, 300 sec: 2621.4). Total num frames: 131072. Throughput: 0: 711.2. Samples: 32006. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:36:23,576][04513] Avg episode reward: [(0, '4.478')]
[2025-08-14 02:36:23,581][04840] Saving new best policy, reward=4.478!
[2025-08-14 02:36:28,575][04513] Fps is (10 sec: 3276.8, 60 sec: 2755.5, 300 sec: 2755.5). Total num frames: 151552. Throughput: 0: 803.4. Samples: 37574. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:36:28,577][04513] Avg episode reward: [(0, '4.532')]
[2025-08-14 02:36:28,580][04840] Saving new best policy, reward=4.532!
[2025-08-14 02:36:31,265][04854] Updated weights for policy 0, policy_version 40 (0.0014)
[2025-08-14 02:36:33,575][04513] Fps is (10 sec: 3686.4, 60 sec: 2798.9, 300 sec: 2798.9). Total num frames: 167936. Throughput: 0: 832.2. Samples: 42770. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:36:33,581][04513] Avg episode reward: [(0, '4.596')]
[2025-08-14 02:36:33,599][04840] Saving new best policy, reward=4.596!
[2025-08-14 02:36:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 2835.7). Total num frames: 184320. Throughput: 0: 827.6. Samples: 44526. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:36:38,579][04513] Avg episode reward: [(0, '4.484')]
[2025-08-14 02:36:43,487][04854] Updated weights for policy 0, policy_version 50 (0.0016)
[2025-08-14 02:36:43,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 2925.7). Total num frames: 204800. Throughput: 0: 843.9. Samples: 50440. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:36:43,579][04513] Avg episode reward: [(0, '4.349')]
[2025-08-14 02:36:48,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 2949.1). Total num frames: 221184. Throughput: 0: 843.7. Samples: 55580. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:36:48,578][04513] Avg episode reward: [(0, '4.144')]
[2025-08-14 02:36:53,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2969.6). Total num frames: 237568. Throughput: 0: 851.0. Samples: 57724. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:36:53,581][04513] Avg episode reward: [(0, '4.247')]
[2025-08-14 02:36:55,635][04854] Updated weights for policy 0, policy_version 60 (0.0018)
[2025-08-14 02:36:58,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2987.7). Total num frames: 253952. Throughput: 0: 855.6. Samples: 63486. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:36:58,576][04513] Avg episode reward: [(0, '4.324')]
[2025-08-14 02:37:03,577][04513] Fps is (10 sec: 3276.1, 60 sec: 3413.2, 300 sec: 3003.7). Total num frames: 270336. Throughput: 0: 844.2. Samples: 68236. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:37:03,580][04513] Avg episode reward: [(0, '4.348')]
[2025-08-14 02:37:03,588][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000066_270336.pth...
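
Periodic checkpoints are named checkpoint_{version:09d}_{env_steps}.pth; here, policy version 66 after 270336 environment frames. A sketch of the save step (payload and save interval are assumptions):

```python
import torch

def save_checkpoint(model, optimizer, policy_version, env_steps, ckpt_dir):
    # e.g. checkpoint_000000066_270336.pth at version 66 / 270336 frames
    path = f"{ckpt_dir}/checkpoint_{policy_version:09d}_{env_steps}.pth"
    print(f"Saving {path}...")
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "policy_version": policy_version,
                "env_steps": env_steps}, path)
```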
[2025-08-14 02:37:08,293][04854] Updated weights for policy 0, policy_version 70 (0.0015)
[2025-08-14 02:37:08,575][04513] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3018.1). Total num frames: 286720. Throughput: 0: 850.9. Samples: 70296. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:37:08,577][04513] Avg episode reward: [(0, '4.419')]
[2025-08-14 02:37:13,575][04513] Fps is (10 sec: 3277.5, 60 sec: 3345.1, 300 sec: 3031.0). Total num frames: 303104. Throughput: 0: 852.8. Samples: 75952. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:37:13,579][04513] Avg episode reward: [(0, '4.622')]
[2025-08-14 02:37:13,585][04840] Saving new best policy, reward=4.622!
[2025-08-14 02:37:18,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3042.7). Total num frames: 319488. Throughput: 0: 833.8. Samples: 80290. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:37:18,576][04513] Avg episode reward: [(0, '4.633')]
[2025-08-14 02:37:18,578][04840] Saving new best policy, reward=4.633!
[2025-08-14 02:37:21,067][04854] Updated weights for policy 0, policy_version 80 (0.0015)
[2025-08-14 02:37:23,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3053.4). Total num frames: 335872. Throughput: 0: 845.3. Samples: 82566. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:37:23,579][04513] Avg episode reward: [(0, '4.418')]
[2025-08-14 02:37:28,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3063.1). Total num frames: 352256. Throughput: 0: 838.8. Samples: 88186. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:37:28,580][04513] Avg episode reward: [(0, '4.253')]
[2025-08-14 02:37:33,349][04854] Updated weights for policy 0, policy_version 90 (0.0018)
[2025-08-14 02:37:33,578][04513] Fps is (10 sec: 3275.7, 60 sec: 3344.9, 300 sec: 3071.9). Total num frames: 368640. Throughput: 0: 820.7. Samples: 92516. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:37:33,583][04513] Avg episode reward: [(0, '4.306')]
[2025-08-14 02:37:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3080.2). Total num frames: 385024. Throughput: 0: 826.0. Samples: 94892. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:37:38,576][04513] Avg episode reward: [(0, '4.521')]
[2025-08-14 02:37:43,575][04513] Fps is (10 sec: 3277.8, 60 sec: 3276.8, 300 sec: 3087.7). Total num frames: 401408. Throughput: 0: 823.7. Samples: 100552. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:37:43,579][04513] Avg episode reward: [(0, '4.612')]
[2025-08-14 02:37:45,041][04854] Updated weights for policy 0, policy_version 100 (0.0014)
[2025-08-14 02:37:48,579][04513] Fps is (10 sec: 3275.5, 60 sec: 3276.6, 300 sec: 3094.7). Total num frames: 417792. Throughput: 0: 813.0. Samples: 104822. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:37:48,581][04513] Avg episode reward: [(0, '4.555')]
[2025-08-14 02:37:53,575][04513] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3101.2). Total num frames: 434176. Throughput: 0: 825.1. Samples: 107426. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:37:53,579][04513] Avg episode reward: [(0, '4.684')]
[2025-08-14 02:37:53,586][04840] Saving new best policy, reward=4.684!
[2025-08-14 02:37:57,290][04854] Updated weights for policy 0, policy_version 110 (0.0017)
[2025-08-14 02:37:58,575][04513] Fps is (10 sec: 3687.9, 60 sec: 3345.1, 300 sec: 3135.6). Total num frames: 454656. Throughput: 0: 824.9. Samples: 113072. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:37:58,579][04513] Avg episode reward: [(0, '4.705')]
[2025-08-14 02:37:58,582][04840] Saving new best policy, reward=4.705!
[2025-08-14 02:38:03,578][04513] Fps is (10 sec: 3275.8, 60 sec: 3276.7, 300 sec: 3112.9). Total num frames: 466944. Throughput: 0: 818.2. Samples: 117110. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:38:03,580][04513] Avg episode reward: [(0, '4.535')]
[2025-08-14 02:38:08,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3118.2). Total num frames: 483328. Throughput: 0: 825.7. Samples: 119724. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:38:08,579][04513] Avg episode reward: [(0, '4.434')]
[2025-08-14 02:38:10,185][04854] Updated weights for policy 0, policy_version 120 (0.0016)
[2025-08-14 02:38:13,575][04513] Fps is (10 sec: 3687.7, 60 sec: 3345.1, 300 sec: 3148.8). Total num frames: 503808. Throughput: 0: 824.8. Samples: 125304. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:38:13,579][04513] Avg episode reward: [(0, '4.262')]
[2025-08-14 02:38:18,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3127.9). Total num frames: 516096. Throughput: 0: 821.4. Samples: 129476. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:38:18,576][04513] Avg episode reward: [(0, '4.227')]
[2025-08-14 02:38:22,700][04854] Updated weights for policy 0, policy_version 130 (0.0021)
[2025-08-14 02:38:23,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3132.2). Total num frames: 532480. Throughput: 0: 832.5. Samples: 132354. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:38:23,576][04513] Avg episode reward: [(0, '4.237')]
[2025-08-14 02:38:28,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3159.8). Total num frames: 552960. Throughput: 0: 839.4. Samples: 138326. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:38:28,576][04513] Avg episode reward: [(0, '4.382')]
[2025-08-14 02:38:33,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3345.2, 300 sec: 3163.0). Total num frames: 569344. Throughput: 0: 842.3. Samples: 142724. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:38:33,576][04513] Avg episode reward: [(0, '4.416')]
[2025-08-14 02:38:34,426][04854] Updated weights for policy 0, policy_version 140 (0.0017)
[2025-08-14 02:38:38,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3188.2). Total num frames: 589824. Throughput: 0: 849.7. Samples: 145662. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:38:38,579][04513] Avg episode reward: [(0, '4.505')]
[2025-08-14 02:38:43,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3190.6). Total num frames: 606208. Throughput: 0: 856.8. Samples: 151628. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:38:43,579][04513] Avg episode reward: [(0, '4.569')]
[2025-08-14 02:38:46,267][04854] Updated weights for policy 0, policy_version 150 (0.0021)
[2025-08-14 02:38:48,575][04513] Fps is (10 sec: 3276.7, 60 sec: 3413.6, 300 sec: 3192.8). Total num frames: 622592. Throughput: 0: 863.3. Samples: 155954. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:38:48,576][04513] Avg episode reward: [(0, '4.456')]
[2025-08-14 02:38:53,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3215.4). Total num frames: 643072. Throughput: 0: 873.4. Samples: 159026. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:38:53,576][04513] Avg episode reward: [(0, '4.499')]
[2025-08-14 02:38:56,452][04854] Updated weights for policy 0, policy_version 160 (0.0014)
[2025-08-14 02:38:58,577][04513] Fps is (10 sec: 3685.8, 60 sec: 3413.2, 300 sec: 3216.8). Total num frames: 659456. Throughput: 0: 876.7. Samples: 164756. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-08-14 02:38:58,580][04513] Avg episode reward: [(0, '4.658')]
[2025-08-14 02:39:03,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.8, 300 sec: 3218.3). Total num frames: 675840. Throughput: 0: 886.1. Samples: 169350. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:39:03,576][04513] Avg episode reward: [(0, '4.794')]
[2025-08-14 02:39:03,589][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000165_675840.pth...
[2025-08-14 02:39:03,720][04840] Saving new best policy, reward=4.794!
[2025-08-14 02:39:08,575][04513] Fps is (10 sec: 3277.4, 60 sec: 3481.6, 300 sec: 3219.6). Total num frames: 692224. Throughput: 0: 884.1. Samples: 172138. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:39:08,583][04513] Avg episode reward: [(0, '4.781')]
[2025-08-14 02:39:08,900][04854] Updated weights for policy 0, policy_version 170 (0.0014)
[2025-08-14 02:39:13,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3220.9). Total num frames: 708608. Throughput: 0: 872.7. Samples: 177596. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-08-14 02:39:13,583][04513] Avg episode reward: [(0, '4.616')]
[2025-08-14 02:39:18,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3222.2). Total num frames: 724992. Throughput: 0: 884.2. Samples: 182512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:39:18,577][04513] Avg episode reward: [(0, '4.637')]
[2025-08-14 02:39:20,671][04854] Updated weights for policy 0, policy_version 180 (0.0014)
[2025-08-14 02:39:23,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3241.2). Total num frames: 745472. Throughput: 0: 883.2. Samples: 185406. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:39:23,576][04513] Avg episode reward: [(0, '4.690')]
[2025-08-14 02:39:28,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3241.9). Total num frames: 761856. Throughput: 0: 864.7. Samples: 190540. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:39:28,576][04513] Avg episode reward: [(0, '4.559')]
[2025-08-14 02:39:32,970][04854] Updated weights for policy 0, policy_version 190 (0.0014)
[2025-08-14 02:39:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3242.7). Total num frames: 778240. Throughput: 0: 875.5. Samples: 195350. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:39:33,576][04513] Avg episode reward: [(0, '4.635')]
[2025-08-14 02:39:38,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3260.1). Total num frames: 798720. Throughput: 0: 867.5. Samples: 198062. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:39:38,576][04513] Avg episode reward: [(0, '4.826')]
[2025-08-14 02:39:38,581][04840] Saving new best policy, reward=4.826!
[2025-08-14 02:39:43,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3244.0). Total num frames: 811008. Throughput: 0: 845.4. Samples: 202796. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:39:43,578][04513] Avg episode reward: [(0, '4.905')]
[2025-08-14 02:39:43,595][04840] Saving new best policy, reward=4.905!
[2025-08-14 02:39:45,751][04854] Updated weights for policy 0, policy_version 200 (0.0013)
[2025-08-14 02:39:48,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3244.7). Total num frames: 827392. Throughput: 0: 857.8. Samples: 207952. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:39:48,579][04513] Avg episode reward: [(0, '4.665')]
[2025-08-14 02:39:53,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3261.0). Total num frames: 847872. Throughput: 0: 864.3. Samples: 211032. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:39:53,579][04513] Avg episode reward: [(0, '4.539')]
[2025-08-14 02:39:56,137][04854] Updated weights for policy 0, policy_version 210 (0.0022)
[2025-08-14 02:39:58,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3261.3). Total num frames: 864256. Throughput: 0: 855.7. Samples: 216102. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:39:58,585][04513] Avg episode reward: [(0, '4.522')]
[2025-08-14 02:40:03,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 884736. Throughput: 0: 871.6. Samples: 221734. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:40:03,579][04513] Avg episode reward: [(0, '4.819')]
[2025-08-14 02:40:07,372][04854] Updated weights for policy 0, policy_version 220 (0.0013)
[2025-08-14 02:40:08,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3291.7). Total num frames: 905216. Throughput: 0: 874.7. Samples: 224766. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:40:08,579][04513] Avg episode reward: [(0, '4.856')]
[2025-08-14 02:40:13,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 917504. Throughput: 0: 867.4. Samples: 229574. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:40:13,576][04513] Avg episode reward: [(0, '4.883')]
[2025-08-14 02:40:18,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3291.2). Total num frames: 937984. Throughput: 0: 894.8. Samples: 235618. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:40:18,580][04513] Avg episode reward: [(0, '4.773')]
[2025-08-14 02:40:18,647][04854] Updated weights for policy 0, policy_version 230 (0.0015)
[2025-08-14 02:40:23,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3305.0). Total num frames: 958464. Throughput: 0: 902.4. Samples: 238668. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:40:23,581][04513] Avg episode reward: [(0, '4.593')]
[2025-08-14 02:40:28,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3304.6). Total num frames: 974848. Throughput: 0: 895.9. Samples: 243110. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:40:28,580][04513] Avg episode reward: [(0, '4.725')]
[2025-08-14 02:40:30,564][04854] Updated weights for policy 0, policy_version 240 (0.0014)
[2025-08-14 02:40:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3360.1). Total num frames: 991232. Throughput: 0: 912.1. Samples: 248998. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:40:33,580][04513] Avg episode reward: [(0, '4.772')]
[2025-08-14 02:40:38,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 1011712. Throughput: 0: 911.9. Samples: 252068. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:40:38,576][04513] Avg episode reward: [(0, '4.987')]
[2025-08-14 02:40:38,577][04840] Saving new best policy, reward=4.987!
[2025-08-14 02:40:42,549][04854] Updated weights for policy 0, policy_version 250 (0.0024)
[2025-08-14 02:40:43,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 1024000. Throughput: 0: 891.9. Samples: 256236. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-14 02:40:43,578][04513] Avg episode reward: [(0, '4.832')]
[2025-08-14 02:40:48,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3429.5). Total num frames: 1044480. Throughput: 0: 898.9. Samples: 262186. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:40:48,581][04513] Avg episode reward: [(0, '4.894')]
[2025-08-14 02:40:53,151][04854] Updated weights for policy 0, policy_version 260 (0.0017)
[2025-08-14 02:40:53,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3443.4). Total num frames: 1064960. Throughput: 0: 898.3. Samples: 265188. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-08-14 02:40:53,578][04513] Avg episode reward: [(0, '4.817')]
[2025-08-14 02:40:58,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 1077248. Throughput: 0: 888.4. Samples: 269554. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:40:58,576][04513] Avg episode reward: [(0, '4.872')]
[2025-08-14 02:41:03,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1097728. Throughput: 0: 886.4. Samples: 275508. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-08-14 02:41:03,578][04513] Avg episode reward: [(0, '4.813')]
[2025-08-14 02:41:03,585][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000268_1097728.pth...
[2025-08-14 02:41:03,698][04840] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000066_270336.pth
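
Alongside each periodic save, the oldest checkpoint is deleted so only the most recent few are kept; above, the 270336-frame file goes away once the 1097728-frame one lands. A sketch of that retention policy (the keep-count is an assumption):

```python
import glob, os

def prune_checkpoints(ckpt_dir, keep=2):
    # Sorted lexicographically; zero-padded version numbers keep this ordered.
    ckpts = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    for old in ckpts[:-keep]:
        print(f"Removing {old}")
        os.remove(old)
```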
|
[2025-08-14 02:41:04,893][04854] Updated weights for policy 0, policy_version 270 (0.0017) |
|
[2025-08-14 02:41:08,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1114112. Throughput: 0: 885.3. Samples: 278506. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:41:08,582][04513] Avg episode reward: [(0, '4.842')] |
|
[2025-08-14 02:41:13,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 1130496. Throughput: 0: 882.3. Samples: 282814. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:41:13,576][04513] Avg episode reward: [(0, '4.704')] |
|
[2025-08-14 02:41:16,474][04854] Updated weights for policy 0, policy_version 280 (0.0015) |
|
[2025-08-14 02:41:18,576][04513] Fps is (10 sec: 4095.5, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 1155072. Throughput: 0: 888.7. Samples: 288992. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:41:18,581][04513] Avg episode reward: [(0, '4.667')] |
|
[2025-08-14 02:41:23,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 1171456. Throughput: 0: 884.1. Samples: 291852. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:41:23,578][04513] Avg episode reward: [(0, '4.617')] |
|
[2025-08-14 02:41:28,031][04854] Updated weights for policy 0, policy_version 290 (0.0023) |
|
[2025-08-14 02:41:28,575][04513] Fps is (10 sec: 3277.2, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 1187840. Throughput: 0: 896.8. Samples: 296590. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:41:28,579][04513] Avg episode reward: [(0, '4.746')] |
|
[2025-08-14 02:41:33,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 1208320. Throughput: 0: 903.9. Samples: 302860. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:41:33,576][04513] Avg episode reward: [(0, '4.755')] |
|
[2025-08-14 02:41:38,577][04513] Fps is (10 sec: 3685.6, 60 sec: 3549.7, 300 sec: 3457.3). Total num frames: 1224704. Throughput: 0: 894.8. Samples: 305454. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:41:38,578][04513] Avg episode reward: [(0, '4.538')] |
|
[2025-08-14 02:41:39,579][04854] Updated weights for policy 0, policy_version 300 (0.0016) |
|
[2025-08-14 02:41:43,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 1241088. Throughput: 0: 909.0. Samples: 310458. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:41:43,578][04513] Avg episode reward: [(0, '4.789')] |
|
[2025-08-14 02:41:48,575][04513] Fps is (10 sec: 4096.9, 60 sec: 3686.4, 300 sec: 3485.1). Total num frames: 1265664. Throughput: 0: 912.9. Samples: 316590. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:41:48,576][04513] Avg episode reward: [(0, '5.067')] |
|
[2025-08-14 02:41:48,580][04840] Saving new best policy, reward=5.067! |
|
[2025-08-14 02:41:49,631][04854] Updated weights for policy 0, policy_version 310 (0.0016) |
|
[2025-08-14 02:41:53,575][04513] Fps is (10 sec: 3686.3, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 1277952. Throughput: 0: 899.2. Samples: 318970. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:41:53,577][04513] Avg episode reward: [(0, '5.172')] |
|
[2025-08-14 02:41:53,586][04840] Saving new best policy, reward=5.172! |
|
[2025-08-14 02:41:58,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3485.1). Total num frames: 1298432. Throughput: 0: 919.1. Samples: 324172. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:41:58,576][04513] Avg episode reward: [(0, '4.775')] |
|
[2025-08-14 02:42:01,198][04854] Updated weights for policy 0, policy_version 320 (0.0018) |
|
[2025-08-14 02:42:03,575][04513] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3499.0). Total num frames: 1318912. Throughput: 0: 920.3. Samples: 330406. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:42:03,576][04513] Avg episode reward: [(0, '4.982')] |
|
[2025-08-14 02:42:08,575][04513] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1331200. Throughput: 0: 903.3. Samples: 332502. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:42:08,581][04513] Avg episode reward: [(0, '5.114')] |
|
[2025-08-14 02:42:12,991][04854] Updated weights for policy 0, policy_version 330 (0.0019) |
|
[2025-08-14 02:42:13,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3499.0). Total num frames: 1351680. Throughput: 0: 916.8. Samples: 337848. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:42:13,576][04513] Avg episode reward: [(0, '5.166')] |
|
[2025-08-14 02:42:18,575][04513] Fps is (10 sec: 4096.1, 60 sec: 3618.2, 300 sec: 3512.8). Total num frames: 1372160. Throughput: 0: 915.1. Samples: 344038. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:42:18,576][04513] Avg episode reward: [(0, '5.234')] |
|
[2025-08-14 02:42:18,577][04840] Saving new best policy, reward=5.234! |
|
[2025-08-14 02:42:23,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 1388544. Throughput: 0: 899.3. Samples: 345922. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:42:23,578][04513] Avg episode reward: [(0, '4.831')] |
|
[2025-08-14 02:42:24,269][04854] Updated weights for policy 0, policy_version 340 (0.0019) |
|
[2025-08-14 02:42:28,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3526.8). Total num frames: 1409024. Throughput: 0: 916.5. Samples: 351700. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:42:28,577][04513] Avg episode reward: [(0, '5.128')] |
|
[2025-08-14 02:42:33,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 1425408. Throughput: 0: 907.8. Samples: 357442. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:42:33,578][04513] Avg episode reward: [(0, '5.406')] |
|
[2025-08-14 02:42:33,589][04840] Saving new best policy, reward=5.406! |
|
[2025-08-14 02:42:35,714][04854] Updated weights for policy 0, policy_version 350 (0.0018) |
|
[2025-08-14 02:42:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.3, 300 sec: 3526.7). Total num frames: 1441792. Throughput: 0: 895.8. Samples: 359280. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:42:38,579][04513] Avg episode reward: [(0, '5.484')] |
|
[2025-08-14 02:42:38,586][04840] Saving new best policy, reward=5.484! |
|
[2025-08-14 02:42:43,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3540.7). Total num frames: 1462272. Throughput: 0: 912.1. Samples: 365218. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:42:43,581][04513] Avg episode reward: [(0, '5.263')] |
|
[2025-08-14 02:42:46,109][04854] Updated weights for policy 0, policy_version 360 (0.0015) |
|
[2025-08-14 02:42:48,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 1482752. Throughput: 0: 897.9. Samples: 370812. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:42:48,579][04513] Avg episode reward: [(0, '5.113')] |
|
[2025-08-14 02:42:53,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1499136. Throughput: 0: 898.1. Samples: 372918. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:42:53,576][04513] Avg episode reward: [(0, '5.443')] |
|
[2025-08-14 02:42:57,463][04854] Updated weights for policy 0, policy_version 370 (0.0018) |
|
[2025-08-14 02:42:58,575][04513] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 1515520. Throughput: 0: 915.6. Samples: 379048. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:42:58,577][04513] Avg episode reward: [(0, '5.793')] |
|
[2025-08-14 02:42:58,628][04840] Saving new best policy, reward=5.793! |
|
[2025-08-14 02:43:03,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 1536000. Throughput: 0: 892.8. Samples: 384216. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:43:03,576][04513] Avg episode reward: [(0, '5.702')] |
|
[2025-08-14 02:43:03,581][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000375_1536000.pth... |
|
[2025-08-14 02:43:03,707][04840] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000165_675840.pth |
|
[2025-08-14 02:43:08,575][04513] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3554.5). Total num frames: 1552384. Throughput: 0: 901.2. Samples: 386474. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:43:08,581][04513] Avg episode reward: [(0, '5.775')] |
|
[2025-08-14 02:43:09,418][04854] Updated weights for policy 0, policy_version 380 (0.0016) |
|
[2025-08-14 02:43:13,576][04513] Fps is (10 sec: 3686.0, 60 sec: 3686.3, 300 sec: 3582.3). Total num frames: 1572864. Throughput: 0: 905.4. Samples: 392442. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:43:13,578][04513] Avg episode reward: [(0, '5.793')] |
|
[2025-08-14 02:43:18,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 1585152. Throughput: 0: 888.0. Samples: 397402. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:43:18,576][04513] Avg episode reward: [(0, '6.021')] |
|
[2025-08-14 02:43:18,669][04840] Saving new best policy, reward=6.021! |
|
[2025-08-14 02:43:21,345][04854] Updated weights for policy 0, policy_version 390 (0.0015) |
|
[2025-08-14 02:43:23,575][04513] Fps is (10 sec: 3277.2, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 1605632. Throughput: 0: 900.8. Samples: 399814. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:43:23,576][04513] Avg episode reward: [(0, '5.793')] |
|
[2025-08-14 02:43:28,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 1626112. Throughput: 0: 903.9. Samples: 405894. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:43:28,576][04513] Avg episode reward: [(0, '5.960')] |
|
[2025-08-14 02:43:32,361][04854] Updated weights for policy 0, policy_version 400 (0.0014) |
|
[2025-08-14 02:43:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 1638400. Throughput: 0: 883.5. Samples: 410570. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:43:33,579][04513] Avg episode reward: [(0, '5.916')] |
|
[2025-08-14 02:43:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 1658880. Throughput: 0: 899.9. Samples: 413414. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:43:38,580][04513] Avg episode reward: [(0, '6.153')] |
|
[2025-08-14 02:43:38,584][04840] Saving new best policy, reward=6.153! |
|
[2025-08-14 02:43:43,149][04854] Updated weights for policy 0, policy_version 410 (0.0019) |
|
[2025-08-14 02:43:43,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 1679360. Throughput: 0: 896.8. Samples: 419402. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:43:43,579][04513] Avg episode reward: [(0, '6.611')] |
|
[2025-08-14 02:43:43,585][04840] Saving new best policy, reward=6.611! |
|
[2025-08-14 02:43:48,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3554.5). Total num frames: 1691648. Throughput: 0: 881.6. Samples: 423888. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:43:48,578][04513] Avg episode reward: [(0, '6.273')] |
|
[2025-08-14 02:43:53,575][04513] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 1712128. Throughput: 0: 899.2. Samples: 426940. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:43:53,577][04513] Avg episode reward: [(0, '5.972')] |
|
[2025-08-14 02:43:54,752][04854] Updated weights for policy 0, policy_version 420 (0.0014) |
|
[2025-08-14 02:43:58,577][04513] Fps is (10 sec: 4095.3, 60 sec: 3618.0, 300 sec: 3582.2). Total num frames: 1732608. Throughput: 0: 905.0. Samples: 433166. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:43:58,583][04513] Avg episode reward: [(0, '5.943')] |
|
[2025-08-14 02:44:03,575][04513] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 1748992. Throughput: 0: 894.2. Samples: 437640. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:03,580][04513] Avg episode reward: [(0, '6.259')] |
|
[2025-08-14 02:44:06,317][04854] Updated weights for policy 0, policy_version 430 (0.0013) |
|
[2025-08-14 02:44:08,575][04513] Fps is (10 sec: 3687.1, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 1769472. Throughput: 0: 908.3. Samples: 440686. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:08,580][04513] Avg episode reward: [(0, '6.686')] |
|
[2025-08-14 02:44:08,583][04840] Saving new best policy, reward=6.686! |
|
[2025-08-14 02:44:13,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1785856. Throughput: 0: 904.4. Samples: 446590. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:13,577][04513] Avg episode reward: [(0, '6.561')] |
|
[2025-08-14 02:44:18,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 1798144. Throughput: 0: 894.9. Samples: 450840. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:18,576][04513] Avg episode reward: [(0, '6.964')] |
|
[2025-08-14 02:44:18,580][04840] Saving new best policy, reward=6.964! |
|
[2025-08-14 02:44:18,590][04854] Updated weights for policy 0, policy_version 440 (0.0018) |
|
[2025-08-14 02:44:23,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 1818624. Throughput: 0: 894.4. Samples: 453664. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:23,576][04513] Avg episode reward: [(0, '6.871')] |
|
[2025-08-14 02:44:28,578][04513] Fps is (10 sec: 3685.2, 60 sec: 3481.4, 300 sec: 3582.2). Total num frames: 1835008. Throughput: 0: 886.3. Samples: 459290. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:28,579][04513] Avg episode reward: [(0, '6.849')] |
|
[2025-08-14 02:44:30,448][04854] Updated weights for policy 0, policy_version 450 (0.0017) |
|
[2025-08-14 02:44:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 1851392. Throughput: 0: 884.2. Samples: 463678. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:33,576][04513] Avg episode reward: [(0, '6.985')] |
|
[2025-08-14 02:44:33,581][04840] Saving new best policy, reward=6.985! |
|
[2025-08-14 02:44:38,575][04513] Fps is (10 sec: 3687.6, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1871872. Throughput: 0: 878.9. Samples: 466490. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:38,576][04513] Avg episode reward: [(0, '6.899')] |
|
[2025-08-14 02:44:41,413][04854] Updated weights for policy 0, policy_version 460 (0.0014) |
|
[2025-08-14 02:44:43,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 1888256. Throughput: 0: 863.2. Samples: 472008. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:43,576][04513] Avg episode reward: [(0, '7.242')] |
|
[2025-08-14 02:44:43,585][04840] Saving new best policy, reward=7.242! |
|
[2025-08-14 02:44:48,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 1904640. Throughput: 0: 862.1. Samples: 476436. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:48,576][04513] Avg episode reward: [(0, '6.955')] |
|
[2025-08-14 02:44:53,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 1921024. Throughput: 0: 860.8. Samples: 479422. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:53,576][04513] Avg episode reward: [(0, '7.180')] |
|
[2025-08-14 02:44:53,701][04854] Updated weights for policy 0, policy_version 470 (0.0017) |
|
[2025-08-14 02:44:58,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.4, 300 sec: 3568.4). Total num frames: 1937408. Throughput: 0: 849.4. Samples: 484812. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:44:58,580][04513] Avg episode reward: [(0, '7.295')] |
|
[2025-08-14 02:44:58,582][04840] Saving new best policy, reward=7.295! |
|
[2025-08-14 02:45:03,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3554.5). Total num frames: 1953792. Throughput: 0: 858.3. Samples: 489462. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-08-14 02:45:03,587][04513] Avg episode reward: [(0, '7.262')] |
|
[2025-08-14 02:45:03,596][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000477_1953792.pth... |
|
[2025-08-14 02:45:03,712][04840] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000268_1097728.pth |
|
[2025-08-14 02:45:06,001][04854] Updated weights for policy 0, policy_version 480 (0.0022) |
|
[2025-08-14 02:45:08,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3582.3). Total num frames: 1974272. Throughput: 0: 857.5. Samples: 492252. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:45:08,576][04513] Avg episode reward: [(0, '7.221')] |
|
[2025-08-14 02:45:13,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 1990656. Throughput: 0: 845.4. Samples: 497332. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:45:13,576][04513] Avg episode reward: [(0, '7.412')] |
|
[2025-08-14 02:45:13,581][04840] Saving new best policy, reward=7.412! |
|
[2025-08-14 02:45:18,474][04854] Updated weights for policy 0, policy_version 490 (0.0014) |
|
[2025-08-14 02:45:18,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3554.5). Total num frames: 2007040. Throughput: 0: 852.5. Samples: 502040. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:45:18,576][04513] Avg episode reward: [(0, '7.337')] |
|
[2025-08-14 02:45:23,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3554.5). Total num frames: 2023424. Throughput: 0: 854.2. Samples: 504930. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:45:23,576][04513] Avg episode reward: [(0, '7.191')] |
|
[2025-08-14 02:45:28,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.5, 300 sec: 3554.5). Total num frames: 2039808. Throughput: 0: 838.4. Samples: 509738. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:45:28,576][04513] Avg episode reward: [(0, '6.922')] |
|
[2025-08-14 02:45:30,974][04854] Updated weights for policy 0, policy_version 500 (0.0036) |
|
[2025-08-14 02:45:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3540.6). Total num frames: 2056192. Throughput: 0: 849.3. Samples: 514656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:45:33,583][04513] Avg episode reward: [(0, '7.439')] |
|
[2025-08-14 02:45:33,592][04840] Saving new best policy, reward=7.439! |
|
[2025-08-14 02:45:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3554.5). Total num frames: 2072576. Throughput: 0: 844.7. Samples: 517432. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:45:38,581][04513] Avg episode reward: [(0, '7.335')] |
|
[2025-08-14 02:45:43,309][04854] Updated weights for policy 0, policy_version 510 (0.0027) |
|
[2025-08-14 02:45:43,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3540.6). Total num frames: 2088960. Throughput: 0: 829.2. Samples: 522128. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:45:43,583][04513] Avg episode reward: [(0, '7.984')] |
|
[2025-08-14 02:45:43,594][04840] Saving new best policy, reward=7.984! |
|
[2025-08-14 02:45:48,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 2105344. Throughput: 0: 833.6. Samples: 526976. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:45:48,580][04513] Avg episode reward: [(0, '7.265')] |
|
[2025-08-14 02:45:53,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3554.5). Total num frames: 2125824. Throughput: 0: 835.7. Samples: 529858. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:45:53,580][04513] Avg episode reward: [(0, '7.973')] |
|
[2025-08-14 02:45:54,393][04854] Updated weights for policy 0, policy_version 520 (0.0021) |
|
[2025-08-14 02:45:58,576][04513] Fps is (10 sec: 3276.5, 60 sec: 3345.0, 300 sec: 3526.7). Total num frames: 2138112. Throughput: 0: 829.4. Samples: 534654. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:45:58,582][04513] Avg episode reward: [(0, '7.779')] |
|
[2025-08-14 02:46:03,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 2154496. Throughput: 0: 839.9. Samples: 539836. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:46:03,577][04513] Avg episode reward: [(0, '8.552')] |
|
[2025-08-14 02:46:03,587][04840] Saving new best policy, reward=8.552! |
|
[2025-08-14 02:46:06,875][04854] Updated weights for policy 0, policy_version 530 (0.0020) |
|
[2025-08-14 02:46:08,575][04513] Fps is (10 sec: 3686.8, 60 sec: 3345.1, 300 sec: 3540.6). Total num frames: 2174976. Throughput: 0: 837.2. Samples: 542604. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:46:08,576][04513] Avg episode reward: [(0, '8.562')] |
|
[2025-08-14 02:46:08,578][04840] Saving new best policy, reward=8.562! |
|
[2025-08-14 02:46:13,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3499.0). Total num frames: 2187264. Throughput: 0: 827.9. Samples: 546994. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:46:13,576][04513] Avg episode reward: [(0, '8.968')] |
|
[2025-08-14 02:46:13,581][04840] Saving new best policy, reward=8.968! |
|
[2025-08-14 02:46:18,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3512.8). Total num frames: 2207744. Throughput: 0: 836.6. Samples: 552304. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:46:18,576][04513] Avg episode reward: [(0, '9.700')] |
|
[2025-08-14 02:46:18,583][04840] Saving new best policy, reward=9.700! |
|
[2025-08-14 02:46:19,549][04854] Updated weights for policy 0, policy_version 540 (0.0015) |
|
[2025-08-14 02:46:23,578][04513] Fps is (10 sec: 3685.2, 60 sec: 3344.9, 300 sec: 3512.8). Total num frames: 2224128. Throughput: 0: 837.5. Samples: 555122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:46:23,579][04513] Avg episode reward: [(0, '10.129')] |
|
[2025-08-14 02:46:23,588][04840] Saving new best policy, reward=10.129! |
|
[2025-08-14 02:46:28,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3485.1). Total num frames: 2236416. Throughput: 0: 827.4. Samples: 559362. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:46:28,577][04513] Avg episode reward: [(0, '10.541')] |
|
[2025-08-14 02:46:28,580][04840] Saving new best policy, reward=10.541! |
|
[2025-08-14 02:46:32,222][04854] Updated weights for policy 0, policy_version 550 (0.0014) |
|
[2025-08-14 02:46:33,575][04513] Fps is (10 sec: 3277.9, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 2256896. Throughput: 0: 840.8. Samples: 564812. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:46:33,576][04513] Avg episode reward: [(0, '11.545')] |
|
[2025-08-14 02:46:33,582][04840] Saving new best policy, reward=11.545! |
|
[2025-08-14 02:46:38,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 2273280. Throughput: 0: 838.5. Samples: 567590. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:46:38,577][04513] Avg episode reward: [(0, '12.014')] |
|
[2025-08-14 02:46:38,578][04840] Saving new best policy, reward=12.014! |
|
[2025-08-14 02:46:43,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3457.3). Total num frames: 2285568. Throughput: 0: 821.8. Samples: 571634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 02:46:43,580][04513] Avg episode reward: [(0, '12.938')] |
|
[2025-08-14 02:46:43,588][04840] Saving new best policy, reward=12.938! |
|
[2025-08-14 02:46:44,875][04854] Updated weights for policy 0, policy_version 560 (0.0018) |
|
[2025-08-14 02:46:48,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3485.1). Total num frames: 2306048. Throughput: 0: 826.5. Samples: 577030. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:46:48,580][04513] Avg episode reward: [(0, '13.332')] |
|
[2025-08-14 02:46:48,586][04840] Saving new best policy, reward=13.332! |
|
[2025-08-14 02:46:53,578][04513] Fps is (10 sec: 3685.2, 60 sec: 3276.6, 300 sec: 3471.1). Total num frames: 2322432. Throughput: 0: 827.5. Samples: 579844. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:46:53,579][04513] Avg episode reward: [(0, '13.511')] |
|
[2025-08-14 02:46:53,588][04840] Saving new best policy, reward=13.511! |
|
[2025-08-14 02:46:57,673][04854] Updated weights for policy 0, policy_version 570 (0.0014) |
|
[2025-08-14 02:46:58,577][04513] Fps is (10 sec: 2866.5, 60 sec: 3276.7, 300 sec: 3443.4). Total num frames: 2334720. Throughput: 0: 819.8. Samples: 583886. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:46:58,578][04513] Avg episode reward: [(0, '13.329')] |
|
[2025-08-14 02:47:03,575][04513] Fps is (10 sec: 3687.5, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2359296. Throughput: 0: 837.1. Samples: 589974. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:47:03,579][04513] Avg episode reward: [(0, '14.035')] |
|
[2025-08-14 02:47:03,587][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000576_2359296.pth... |
|
[2025-08-14 02:47:03,680][04840] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000375_1536000.pth |
|
[2025-08-14 02:47:03,693][04840] Saving new best policy, reward=14.035! |
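
The Saving/Removing pair above shows the rolling-checkpoint behaviour (writing a new checkpoint evicts the oldest one), while the "Saving new best policy" lines track a separate best-reward file. A minimal sketch of that pattern follows; it is an illustration of what the log shows, not sample_factory's actual implementation, and the keep-last-2 policy is an assumption inferred from the log.

```python
# Illustrative sketch (assumptions noted above), not sample_factory's code.
import os
from collections import deque

class CheckpointRotation:
    def __init__(self, keep_last: int = 2):
        self.saved = deque()            # rolling checkpoint paths, oldest first
        self.keep_last = keep_last
        self.best_reward = float("-inf")

    def save(self, path: str, state_bytes: bytes) -> None:
        with open(path, "wb") as f:
            f.write(state_bytes)        # mirrors "Saving ...checkpoint_*.pth..."
        self.saved.append(path)
        while len(self.saved) > self.keep_last:
            os.remove(self.saved.popleft())  # mirrors "Removing ...checkpoint_*.pth"

    def maybe_save_best(self, reward: float, path: str, state_bytes: bytes) -> None:
        if reward > self.best_reward:   # mirrors "Saving new best policy, reward=..."
            self.best_reward = reward
            with open(path, "wb") as f:
                f.write(state_bytes)
```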
|
[2025-08-14 02:47:07,880][04854] Updated weights for policy 0, policy_version 580 (0.0019) |
|
[2025-08-14 02:47:08,575][04513] Fps is (10 sec: 4096.8, 60 sec: 3345.0, 300 sec: 3471.2). Total num frames: 2375680. Throughput: 0: 839.5. Samples: 592896. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:47:08,582][04513] Avg episode reward: [(0, '13.653')] |
|
[2025-08-14 02:47:13,578][04513] Fps is (10 sec: 3275.9, 60 sec: 3413.2, 300 sec: 3457.3). Total num frames: 2392064. Throughput: 0: 843.1. Samples: 597302. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:47:13,580][04513] Avg episode reward: [(0, '13.266')] |
|
[2025-08-14 02:47:18,575][04513] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 2408448. Throughput: 0: 857.3. Samples: 603390. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:47:18,576][04513] Avg episode reward: [(0, '12.522')] |
|
[2025-08-14 02:47:19,473][04854] Updated weights for policy 0, policy_version 590 (0.0013) |
|
[2025-08-14 02:47:23,578][04513] Fps is (10 sec: 3686.5, 60 sec: 3413.4, 300 sec: 3457.3). Total num frames: 2428928. Throughput: 0: 863.4. Samples: 606446. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:47:23,579][04513] Avg episode reward: [(0, '12.845')] |
|
[2025-08-14 02:47:28,575][04513] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2445312. Throughput: 0: 874.0. Samples: 610962. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:47:28,576][04513] Avg episode reward: [(0, '13.098')] |
|
[2025-08-14 02:47:31,156][04854] Updated weights for policy 0, policy_version 600 (0.0017) |
|
[2025-08-14 02:47:33,575][04513] Fps is (10 sec: 3687.5, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 2465792. Throughput: 0: 891.1. Samples: 617130. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:47:33,579][04513] Avg episode reward: [(0, '13.091')] |
|
[2025-08-14 02:47:38,576][04513] Fps is (10 sec: 3685.9, 60 sec: 3481.5, 300 sec: 3457.3). Total num frames: 2482176. Throughput: 0: 894.2. Samples: 620082. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:47:38,583][04513] Avg episode reward: [(0, '13.677')] |
|
[2025-08-14 02:47:42,599][04854] Updated weights for policy 0, policy_version 610 (0.0014) |
|
[2025-08-14 02:47:43,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2498560. Throughput: 0: 907.4. Samples: 624716. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:47:43,579][04513] Avg episode reward: [(0, '13.890')] |
|
[2025-08-14 02:47:48,575][04513] Fps is (10 sec: 3687.0, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2519040. Throughput: 0: 907.7. Samples: 630822. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:47:48,576][04513] Avg episode reward: [(0, '14.378')] |
|
[2025-08-14 02:47:48,578][04840] Saving new best policy, reward=14.378! |
|
[2025-08-14 02:47:53,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3550.1, 300 sec: 3457.3). Total num frames: 2535424. Throughput: 0: 902.0. Samples: 633484. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:47:53,577][04513] Avg episode reward: [(0, '13.997')] |
|
[2025-08-14 02:47:54,222][04854] Updated weights for policy 0, policy_version 620 (0.0013) |
|
[2025-08-14 02:47:58,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3457.3). Total num frames: 2555904. Throughput: 0: 911.7. Samples: 638326. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:47:58,576][04513] Avg episode reward: [(0, '14.252')] |
|
[2025-08-14 02:48:03,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 2576384. Throughput: 0: 913.9. Samples: 644514. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:48:03,576][04513] Avg episode reward: [(0, '13.640')] |
|
[2025-08-14 02:48:04,363][04854] Updated weights for policy 0, policy_version 630 (0.0015) |
|
[2025-08-14 02:48:08,575][04513] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2588672. Throughput: 0: 902.8. Samples: 647070. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-08-14 02:48:08,577][04513] Avg episode reward: [(0, '13.094')] |
|
[2025-08-14 02:48:13,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.3, 300 sec: 3471.2). Total num frames: 2609152. Throughput: 0: 917.7. Samples: 652260. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:48:13,577][04513] Avg episode reward: [(0, '14.066')] |
|
[2025-08-14 02:48:15,818][04854] Updated weights for policy 0, policy_version 640 (0.0017) |
|
[2025-08-14 02:48:18,575][04513] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3471.2). Total num frames: 2629632. Throughput: 0: 918.8. Samples: 658478. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 02:48:18,576][04513] Avg episode reward: [(0, '14.353')] |
|
[2025-08-14 02:48:23,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.3, 300 sec: 3457.3). Total num frames: 2646016. Throughput: 0: 901.2. Samples: 660634. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-08-14 02:48:23,576][04513] Avg episode reward: [(0, '15.599')] |
|
[2025-08-14 02:48:23,588][04840] Saving new best policy, reward=15.599! |
|
[2025-08-14 02:48:27,150][04854] Updated weights for policy 0, policy_version 650 (0.0016) |
|
[2025-08-14 02:48:28,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3485.1). Total num frames: 2666496. Throughput: 0: 918.0. Samples: 666026. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:48:28,576][04513] Avg episode reward: [(0, '16.689')] |
|
[2025-08-14 02:48:28,582][04840] Saving new best policy, reward=16.689! |
|
[2025-08-14 02:48:33,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3485.1). Total num frames: 2686976. Throughput: 0: 917.1. Samples: 672092. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:48:33,582][04513] Avg episode reward: [(0, '17.841')] |
|
[2025-08-14 02:48:33,593][04840] Saving new best policy, reward=17.841! |
|
[2025-08-14 02:48:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3457.3). Total num frames: 2699264. Throughput: 0: 901.0. Samples: 674030. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:48:38,576][04513] Avg episode reward: [(0, '16.701')] |
|
[2025-08-14 02:48:39,174][04854] Updated weights for policy 0, policy_version 660 (0.0015) |
|
[2025-08-14 02:48:43,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3485.1). Total num frames: 2719744. Throughput: 0: 918.6. Samples: 679664. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:48:43,576][04513] Avg episode reward: [(0, '16.182')] |
|
[2025-08-14 02:48:48,578][04513] Fps is (10 sec: 4094.7, 60 sec: 3686.2, 300 sec: 3485.0). Total num frames: 2740224. Throughput: 0: 911.8. Samples: 685546. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:48:48,581][04513] Avg episode reward: [(0, '16.361')] |
|
[2025-08-14 02:48:49,761][04854] Updated weights for policy 0, policy_version 670 (0.0031) |
|
[2025-08-14 02:48:53,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 2752512. Throughput: 0: 897.0. Samples: 687436. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:48:53,576][04513] Avg episode reward: [(0, '15.604')] |
|
[2025-08-14 02:48:58,575][04513] Fps is (10 sec: 3277.8, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 2772992. Throughput: 0: 909.5. Samples: 693188. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:48:58,580][04513] Avg episode reward: [(0, '16.017')] |
|
[2025-08-14 02:49:00,850][04854] Updated weights for policy 0, policy_version 680 (0.0017) |
|
[2025-08-14 02:49:03,579][04513] Fps is (10 sec: 3685.0, 60 sec: 3549.6, 300 sec: 3457.3). Total num frames: 2789376. Throughput: 0: 893.8. Samples: 698704. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:49:03,580][04513] Avg episode reward: [(0, '16.635')] |
|
[2025-08-14 02:49:03,614][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000682_2793472.pth... |
|
[2025-08-14 02:49:03,760][04840] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000477_1953792.pth |
|
[2025-08-14 02:49:08,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 2805760. Throughput: 0: 885.5. Samples: 700482. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:49:08,576][04513] Avg episode reward: [(0, '17.558')] |
|
[2025-08-14 02:49:12,941][04854] Updated weights for policy 0, policy_version 690 (0.0013) |
|
[2025-08-14 02:49:13,575][04513] Fps is (10 sec: 3687.8, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2826240. Throughput: 0: 897.8. Samples: 706428. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:49:13,582][04513] Avg episode reward: [(0, '18.217')] |
|
[2025-08-14 02:49:13,592][04840] Saving new best policy, reward=18.217! |
|
[2025-08-14 02:49:18,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2842624. Throughput: 0: 879.6. Samples: 711676. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:49:18,577][04513] Avg episode reward: [(0, '19.099')] |
|
[2025-08-14 02:49:18,592][04840] Saving new best policy, reward=19.099! |
|
[2025-08-14 02:49:23,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2859008. Throughput: 0: 881.6. Samples: 713702. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:49:23,578][04513] Avg episode reward: [(0, '19.558')] |
|
[2025-08-14 02:49:23,584][04840] Saving new best policy, reward=19.558! |
|
[2025-08-14 02:49:24,882][04854] Updated weights for policy 0, policy_version 700 (0.0018) |
|
[2025-08-14 02:49:28,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2879488. Throughput: 0: 890.6. Samples: 719740. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:49:28,576][04513] Avg episode reward: [(0, '19.166')] |
|
[2025-08-14 02:49:33,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 2895872. Throughput: 0: 876.6. Samples: 724992. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:49:33,578][04513] Avg episode reward: [(0, '19.091')] |
|
[2025-08-14 02:49:36,551][04854] Updated weights for policy 0, policy_version 710 (0.0021) |
|
[2025-08-14 02:49:38,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2916352. Throughput: 0: 889.2. Samples: 727450. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:49:38,577][04513] Avg episode reward: [(0, '19.043')] |
|
[2025-08-14 02:49:43,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2936832. Throughput: 0: 901.3. Samples: 733748. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:49:43,576][04513] Avg episode reward: [(0, '19.984')] |
|
[2025-08-14 02:49:43,581][04840] Saving new best policy, reward=19.984! |
|
[2025-08-14 02:49:46,726][04854] Updated weights for policy 0, policy_version 720 (0.0014) |
|
[2025-08-14 02:49:48,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3550.0, 300 sec: 3499.0). Total num frames: 2953216. Throughput: 0: 887.2. Samples: 738626. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:49:48,580][04513] Avg episode reward: [(0, '18.661')] |
|
[2025-08-14 02:49:53,575][04513] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3512.8). Total num frames: 2973696. Throughput: 0: 910.9. Samples: 741474. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:49:53,577][04513] Avg episode reward: [(0, '19.341')] |
|
[2025-08-14 02:49:57,513][04854] Updated weights for policy 0, policy_version 730 (0.0019) |
|
[2025-08-14 02:49:58,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2990080. Throughput: 0: 918.4. Samples: 747758. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:49:58,579][04513] Avg episode reward: [(0, '19.998')] |
|
[2025-08-14 02:49:58,625][04840] Saving new best policy, reward=19.998! |
|
[2025-08-14 02:50:03,575][04513] Fps is (10 sec: 3276.9, 60 sec: 3618.4, 300 sec: 3499.0). Total num frames: 3006464. Throughput: 0: 903.8. Samples: 752346. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:50:03,580][04513] Avg episode reward: [(0, '20.516')] |
|
[2025-08-14 02:50:03,594][04840] Saving new best policy, reward=20.516! |
|
[2025-08-14 02:50:08,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3512.8). Total num frames: 3026944. Throughput: 0: 924.0. Samples: 755284. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:50:08,577][04513] Avg episode reward: [(0, '19.655')] |
|
[2025-08-14 02:50:09,188][04854] Updated weights for policy 0, policy_version 740 (0.0017) |
|
[2025-08-14 02:50:13,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3047424. Throughput: 0: 930.1. Samples: 761596. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-08-14 02:50:13,576][04513] Avg episode reward: [(0, '20.299')] |
|
[2025-08-14 02:50:18,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3063808. Throughput: 0: 912.7. Samples: 766062. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:50:18,576][04513] Avg episode reward: [(0, '20.394')] |
|
[2025-08-14 02:50:20,645][04854] Updated weights for policy 0, policy_version 750 (0.0035) |
|
[2025-08-14 02:50:23,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3540.6). Total num frames: 3084288. Throughput: 0: 928.2. Samples: 769218. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:50:23,577][04513] Avg episode reward: [(0, '19.345')] |
|
[2025-08-14 02:50:28,575][04513] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3554.5). Total num frames: 3104768. Throughput: 0: 927.2. Samples: 775472. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:50:28,580][04513] Avg episode reward: [(0, '17.782')] |
|
[2025-08-14 02:50:31,838][04854] Updated weights for policy 0, policy_version 760 (0.0020) |
|
[2025-08-14 02:50:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 3117056. Throughput: 0: 914.8. Samples: 779790. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:50:33,578][04513] Avg episode reward: [(0, '18.391')] |
|
[2025-08-14 02:50:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3554.5). Total num frames: 3137536. Throughput: 0: 918.0. Samples: 782786. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:50:38,578][04513] Avg episode reward: [(0, '16.650')] |
|
[2025-08-14 02:50:42,350][04854] Updated weights for policy 0, policy_version 770 (0.0023) |
|
[2025-08-14 02:50:43,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 3153920. Throughput: 0: 914.4. Samples: 788906. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:50:43,580][04513] Avg episode reward: [(0, '16.373')] |
|
[2025-08-14 02:50:48,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 3170304. Throughput: 0: 909.5. Samples: 793274. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:50:48,576][04513] Avg episode reward: [(0, '16.565')] |
|
[2025-08-14 02:50:53,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 3190784. Throughput: 0: 911.0. Samples: 796278. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:50:53,576][04513] Avg episode reward: [(0, '17.760')] |
|
[2025-08-14 02:50:54,101][04854] Updated weights for policy 0, policy_version 780 (0.0019) |
|
[2025-08-14 02:50:58,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 3207168. Throughput: 0: 899.4. Samples: 802068. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:50:58,576][04513] Avg episode reward: [(0, '17.467')] |
|
[2025-08-14 02:51:03,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 3223552. Throughput: 0: 898.4. Samples: 806488. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:51:03,576][04513] Avg episode reward: [(0, '18.403')] |
|
[2025-08-14 02:51:03,584][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000787_3223552.pth... |
|
[2025-08-14 02:51:03,694][04840] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000576_2359296.pth |
|
[2025-08-14 02:51:06,261][04854] Updated weights for policy 0, policy_version 790 (0.0020) |
|
[2025-08-14 02:51:08,575][04513] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 3244032. Throughput: 0: 891.6. Samples: 809340. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:51:08,576][04513] Avg episode reward: [(0, '20.393')] |
|
[2025-08-14 02:51:13,577][04513] Fps is (10 sec: 3685.6, 60 sec: 3549.7, 300 sec: 3568.4). Total num frames: 3260416. Throughput: 0: 874.4. Samples: 814824. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:51:13,578][04513] Avg episode reward: [(0, '20.449')] |
|
[2025-08-14 02:51:18,478][04854] Updated weights for policy 0, policy_version 800 (0.0014) |
|
[2025-08-14 02:51:18,575][04513] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 3276800. Throughput: 0: 881.6. Samples: 819460. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:51:18,580][04513] Avg episode reward: [(0, '19.003')] |
|
[2025-08-14 02:51:23,575][04513] Fps is (10 sec: 3687.2, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3297280. Throughput: 0: 882.7. Samples: 822506. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:51:23,576][04513] Avg episode reward: [(0, '19.739')] |
|
[2025-08-14 02:51:28,575][04513] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3313664. Throughput: 0: 870.8. Samples: 828092. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:51:28,578][04513] Avg episode reward: [(0, '19.541')] |
|
[2025-08-14 02:51:29,961][04854] Updated weights for policy 0, policy_version 810 (0.0018) |
|
[2025-08-14 02:51:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 3330048. Throughput: 0: 885.6. Samples: 833124. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:51:33,576][04513] Avg episode reward: [(0, '19.715')] |
|
[2025-08-14 02:51:38,575][04513] Fps is (10 sec: 3686.6, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 3350528. Throughput: 0: 884.6. Samples: 836086. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:51:38,582][04513] Avg episode reward: [(0, '20.099')] |
|
[2025-08-14 02:51:40,240][04854] Updated weights for policy 0, policy_version 820 (0.0019) |
|
[2025-08-14 02:51:43,575][04513] Fps is (10 sec: 3686.3, 60 sec: 3549.8, 300 sec: 3596.1). Total num frames: 3366912. Throughput: 0: 875.7. Samples: 841474. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:51:43,576][04513] Avg episode reward: [(0, '20.216')] |
|
[2025-08-14 02:51:48,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.2). Total num frames: 3383296. Throughput: 0: 893.2. Samples: 846682. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:51:48,579][04513] Avg episode reward: [(0, '20.385')] |
|
[2025-08-14 02:51:52,086][04854] Updated weights for policy 0, policy_version 830 (0.0014) |
|
[2025-08-14 02:51:53,575][04513] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3623.9). Total num frames: 3403776. Throughput: 0: 893.8. Samples: 849562. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:51:53,579][04513] Avg episode reward: [(0, '19.880')] |
|
[2025-08-14 02:51:58,576][04513] Fps is (10 sec: 3276.4, 60 sec: 3481.5, 300 sec: 3582.3). Total num frames: 3416064. Throughput: 0: 876.9. Samples: 854284. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:51:58,583][04513] Avg episode reward: [(0, '19.443')] |
|
[2025-08-14 02:52:03,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3432448. Throughput: 0: 887.1. Samples: 859380. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:52:03,576][04513] Avg episode reward: [(0, '20.337')] |
|
[2025-08-14 02:52:04,584][04854] Updated weights for policy 0, policy_version 840 (0.0020) |
|
[2025-08-14 02:52:08,575][04513] Fps is (10 sec: 3686.9, 60 sec: 3481.6, 300 sec: 3596.2). Total num frames: 3452928. Throughput: 0: 884.3. Samples: 862300. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:52:08,583][04513] Avg episode reward: [(0, '19.587')] |
|
[2025-08-14 02:52:13,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3596.1). Total num frames: 3469312. Throughput: 0: 864.0. Samples: 866972. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:52:13,576][04513] Avg episode reward: [(0, '19.348')] |
|
[2025-08-14 02:52:16,876][04854] Updated weights for policy 0, policy_version 850 (0.0013) |
|
[2025-08-14 02:52:18,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3485696. Throughput: 0: 872.6. Samples: 872392. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:52:18,576][04513] Avg episode reward: [(0, '19.690')] |
|
[2025-08-14 02:52:23,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3596.2). Total num frames: 3506176. Throughput: 0: 869.7. Samples: 875222. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:52:23,576][04513] Avg episode reward: [(0, '20.436')] |
|
[2025-08-14 02:52:28,575][04513] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 3518464. Throughput: 0: 852.8. Samples: 879850. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:52:28,578][04513] Avg episode reward: [(0, '19.864')] |
|
[2025-08-14 02:52:28,908][04854] Updated weights for policy 0, policy_version 860 (0.0015) |
|
[2025-08-14 02:52:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3538944. Throughput: 0: 862.5. Samples: 885494. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:52:33,576][04513] Avg episode reward: [(0, '19.387')] |
|
[2025-08-14 02:52:38,575][04513] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 3559424. Throughput: 0: 863.1. Samples: 888402. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:52:38,576][04513] Avg episode reward: [(0, '19.939')] |
|
[2025-08-14 02:52:39,828][04854] Updated weights for policy 0, policy_version 870 (0.0017) |
|
[2025-08-14 02:52:43,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 3571712. Throughput: 0: 853.4. Samples: 892684. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:52:43,577][04513] Avg episode reward: [(0, '19.452')] |
|
[2025-08-14 02:52:48,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3592192. Throughput: 0: 864.4. Samples: 898276. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:52:48,576][04513] Avg episode reward: [(0, '20.107')] |
|
[2025-08-14 02:52:51,892][04854] Updated weights for policy 0, policy_version 880 (0.0014) |
|
[2025-08-14 02:52:53,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 3608576. Throughput: 0: 861.2. Samples: 901054. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:52:53,584][04513] Avg episode reward: [(0, '21.613')] |
|
[2025-08-14 02:52:53,594][04840] Saving new best policy, reward=21.613! |
|
[2025-08-14 02:52:58,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3413.4, 300 sec: 3540.6). Total num frames: 3620864. Throughput: 0: 851.2. Samples: 905278. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:52:58,578][04513] Avg episode reward: [(0, '21.984')] |
|
[2025-08-14 02:52:58,581][04840] Saving new best policy, reward=21.984! |
|
[2025-08-14 02:53:03,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 3641344. Throughput: 0: 857.1. Samples: 910960. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:53:03,576][04513] Avg episode reward: [(0, '22.786')] |
|
[2025-08-14 02:53:03,585][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000889_3641344.pth... |
|
[2025-08-14 02:53:03,706][04840] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000682_2793472.pth |
|
[2025-08-14 02:53:03,722][04840] Saving new best policy, reward=22.786! |
|
[2025-08-14 02:53:04,342][04854] Updated weights for policy 0, policy_version 890 (0.0016) |
|
[2025-08-14 02:53:08,577][04513] Fps is (10 sec: 3685.6, 60 sec: 3413.2, 300 sec: 3554.5). Total num frames: 3657728. Throughput: 0: 854.3. Samples: 913666. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:53:08,580][04513] Avg episode reward: [(0, '21.624')] |
|
[2025-08-14 02:53:13,577][04513] Fps is (10 sec: 2866.6, 60 sec: 3344.9, 300 sec: 3526.7). Total num frames: 3670016. Throughput: 0: 841.9. Samples: 917736. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:53:13,583][04513] Avg episode reward: [(0, '21.412')] |
|
[2025-08-14 02:53:16,743][04854] Updated weights for policy 0, policy_version 900 (0.0018) |
|
[2025-08-14 02:53:18,575][04513] Fps is (10 sec: 3277.5, 60 sec: 3413.3, 300 sec: 3540.6). Total num frames: 3690496. Throughput: 0: 843.4. Samples: 923448. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:53:18,578][04513] Avg episode reward: [(0, '19.907')] |
|
[2025-08-14 02:53:23,575][04513] Fps is (10 sec: 3687.2, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 3706880. Throughput: 0: 841.4. Samples: 926264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:53:23,579][04513] Avg episode reward: [(0, '19.484')] |
|
[2025-08-14 02:53:28,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3723264. Throughput: 0: 836.4. Samples: 930322. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:53:28,583][04513] Avg episode reward: [(0, '18.685')] |
|
[2025-08-14 02:53:29,270][04854] Updated weights for policy 0, policy_version 910 (0.0044) |
|
[2025-08-14 02:53:33,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 3739648. Throughput: 0: 839.5. Samples: 936052. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:53:33,576][04513] Avg episode reward: [(0, '19.350')] |
|
[2025-08-14 02:53:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3512.8). Total num frames: 3756032. Throughput: 0: 839.4. Samples: 938828. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:53:38,584][04513] Avg episode reward: [(0, '19.998')] |
|
[2025-08-14 02:53:41,928][04854] Updated weights for policy 0, policy_version 920 (0.0016) |
|
[2025-08-14 02:53:43,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 3772416. Throughput: 0: 835.0. Samples: 942852. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:53:43,576][04513] Avg episode reward: [(0, '20.252')] |
|
[2025-08-14 02:53:48,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 3792896. Throughput: 0: 838.3. Samples: 948684. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:53:48,578][04513] Avg episode reward: [(0, '19.674')] |
|
[2025-08-14 02:53:53,579][04513] Fps is (10 sec: 3275.4, 60 sec: 3276.6, 300 sec: 3498.9). Total num frames: 3805184. Throughput: 0: 838.9. Samples: 951420. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:53:53,581][04513] Avg episode reward: [(0, '20.050')] |
|
[2025-08-14 02:53:53,720][04854] Updated weights for policy 0, policy_version 930 (0.0017) |
|
[2025-08-14 02:53:58,576][04513] Fps is (10 sec: 3276.4, 60 sec: 3413.3, 300 sec: 3512.9). Total num frames: 3825664. Throughput: 0: 840.6. Samples: 955562. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:53:58,577][04513] Avg episode reward: [(0, '21.213')] |
|
[2025-08-14 02:54:03,575][04513] Fps is (10 sec: 3687.9, 60 sec: 3345.1, 300 sec: 3512.8). Total num frames: 3842048. Throughput: 0: 843.3. Samples: 961398. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:54:03,576][04513] Avg episode reward: [(0, '20.393')] |
|
[2025-08-14 02:54:05,066][04854] Updated weights for policy 0, policy_version 940 (0.0021) |
|
[2025-08-14 02:54:08,575][04513] Fps is (10 sec: 3277.2, 60 sec: 3345.2, 300 sec: 3499.0). Total num frames: 3858432. Throughput: 0: 840.8. Samples: 964100. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:54:08,578][04513] Avg episode reward: [(0, '19.735')] |
|
[2025-08-14 02:54:13,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3413.5, 300 sec: 3499.0). Total num frames: 3874816. Throughput: 0: 844.7. Samples: 968334. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:54:13,576][04513] Avg episode reward: [(0, '21.955')] |
|
[2025-08-14 02:54:17,312][04854] Updated weights for policy 0, policy_version 950 (0.0014) |
|
[2025-08-14 02:54:18,575][04513] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3895296. Throughput: 0: 846.2. Samples: 974130. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:54:18,578][04513] Avg episode reward: [(0, '22.777')] |
|
[2025-08-14 02:54:23,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3485.1). Total num frames: 3907584. Throughput: 0: 840.7. Samples: 976658. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:54:23,580][04513] Avg episode reward: [(0, '21.782')] |
|
[2025-08-14 02:54:28,575][04513] Fps is (10 sec: 2867.3, 60 sec: 3345.1, 300 sec: 3485.1). Total num frames: 3923968. Throughput: 0: 847.6. Samples: 980994. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:54:28,581][04513] Avg episode reward: [(0, '21.922')] |
|
[2025-08-14 02:54:29,988][04854] Updated weights for policy 0, policy_version 960 (0.0014) |
|
[2025-08-14 02:54:33,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 3944448. Throughput: 0: 845.4. Samples: 986728. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:54:33,582][04513] Avg episode reward: [(0, '21.362')] |
|
[2025-08-14 02:54:38,575][04513] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 3956736. Throughput: 0: 839.5. Samples: 989194. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 02:54:38,579][04513] Avg episode reward: [(0, '20.915')] |
|
[2025-08-14 02:54:42,515][04854] Updated weights for policy 0, policy_version 970 (0.0022) |
|
[2025-08-14 02:54:43,575][04513] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 3973120. Throughput: 0: 844.2. Samples: 993550. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 02:54:43,580][04513] Avg episode reward: [(0, '21.083')] |
|
[2025-08-14 02:54:48,575][04513] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 3993600. Throughput: 0: 838.7. Samples: 999140. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-14 02:54:48,576][04513] Avg episode reward: [(0, '19.687')] |
|
[2025-08-14 02:54:52,041][04513] Component Batcher_0 stopped! |
|
[2025-08-14 02:54:52,046][04513] Component RolloutWorker_w4 process died already! Don't wait for it. |
|
[2025-08-14 02:54:52,049][04513] Component RolloutWorker_w5 process died already! Don't wait for it. |
|
[2025-08-14 02:54:52,050][04513] Component RolloutWorker_w7 process died already! Don't wait for it. |
|
[2025-08-14 02:54:52,040][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 02:54:52,041][04840] Stopping Batcher_0... |
|
[2025-08-14 02:54:52,066][04840] Loop batcher_evt_loop terminating... |
|
[2025-08-14 02:54:52,155][04854] Weights refcount: 2 0 |
|
[2025-08-14 02:54:52,166][04513] Component InferenceWorker_p0-w0 stopped! |
|
[2025-08-14 02:54:52,168][04854] Stopping InferenceWorker_p0-w0... |
|
[2025-08-14 02:54:52,171][04854] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-08-14 02:54:52,232][04840] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000787_3223552.pth |
|
[2025-08-14 02:54:52,250][04840] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 02:54:52,420][04840] Stopping LearnerWorker_p0... |
|
[2025-08-14 02:54:52,420][04840] Loop learner_proc0_evt_loop terminating... |
|
[2025-08-14 02:54:52,422][04513] Component LearnerWorker_p0 stopped! |
|
[2025-08-14 02:54:52,691][04513] Component RolloutWorker_w1 stopped! |
|
[2025-08-14 02:54:52,691][04855] Stopping RolloutWorker_w1... |
|
[2025-08-14 02:54:52,693][04855] Loop rollout_proc1_evt_loop terminating... |
|
[2025-08-14 02:54:52,706][04513] Component RolloutWorker_w2 stopped! |
|
[2025-08-14 02:54:52,711][04859] Stopping RolloutWorker_w2... |
|
[2025-08-14 02:54:52,715][04856] Stopping RolloutWorker_w3... |
|
[2025-08-14 02:54:52,715][04856] Loop rollout_proc3_evt_loop terminating... |
|
[2025-08-14 02:54:52,715][04513] Component RolloutWorker_w3 stopped! |
|
[2025-08-14 02:54:52,720][04513] Component RolloutWorker_w0 stopped! |
|
[2025-08-14 02:54:52,723][04853] Stopping RolloutWorker_w0... |
|
[2025-08-14 02:54:52,728][04853] Loop rollout_proc0_evt_loop terminating... |
|
[2025-08-14 02:54:52,712][04859] Loop rollout_proc2_evt_loop terminating... |
|
[2025-08-14 02:54:52,737][04513] Component RolloutWorker_w6 stopped! |
|
[2025-08-14 02:54:52,739][04513] Waiting for process learner_proc0 to stop... |
|
[2025-08-14 02:54:52,741][04860] Stopping RolloutWorker_w6... |
|
[2025-08-14 02:54:52,742][04860] Loop rollout_proc6_evt_loop terminating... |
|
[2025-08-14 02:54:54,660][04513] Waiting for process inference_proc0-0 to join... |
|
[2025-08-14 02:54:54,667][04513] Waiting for process rollout_proc0 to join... |
|
[2025-08-14 02:54:56,173][04513] Waiting for process rollout_proc1 to join... |
|
[2025-08-14 02:54:56,174][04513] Waiting for process rollout_proc2 to join... |
|
[2025-08-14 02:54:56,176][04513] Waiting for process rollout_proc3 to join... |
|
[2025-08-14 02:54:56,177][04513] Waiting for process rollout_proc4 to join... |
|
[2025-08-14 02:54:56,178][04513] Waiting for process rollout_proc5 to join... |
|
[2025-08-14 02:54:56,179][04513] Waiting for process rollout_proc6 to join... |
|
[2025-08-14 02:54:56,181][04513] Waiting for process rollout_proc7 to join... |
|
[2025-08-14 02:54:56,182][04513] Batcher 0 profile tree view: |
|
batching: 24.9775, releasing_batches: 0.0278 |
|
[2025-08-14 02:54:56,183][04513] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
wait_policy_total: 454.9610 |
|
update_model: 9.8838 |
|
weight_update: 0.0020 |
|
one_step: 0.0159 |
|
handle_policy_step: 653.1671 |
|
deserialize: 15.4700, stack: 3.9776, obs_to_device_normalize: 144.2500, forward: 347.0820, send_messages: 24.2581 |
|
prepare_outputs: 90.2775 |
|
to_cpu: 55.6015 |
|
[2025-08-14 02:54:56,185][04513] Learner 0 profile tree view: |
|
misc: 0.0041, prepare_batch: 12.2711 |
|
train: 70.5863 |
|
epoch_init: 0.0046, minibatch_init: 0.0086, losses_postprocess: 0.6383, kl_divergence: 0.6556, after_optimizer: 32.6798 |
|
calculate_losses: 24.3698 |
|
losses_init: 0.0052, forward_head: 1.3296, bptt_initial: 16.5481, tail: 1.0042, advantages_returns: 0.2580, losses: 3.0263 |
|
bptt: 1.9513 |
|
bptt_forward_core: 1.8667 |
|
update: 11.7165 |
|
clip: 1.0221 |
|
[2025-08-14 02:54:56,186][04513] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3970, enqueue_policy_requests: 108.0993, env_step: 905.8047, overhead: 18.7809, complete_rollouts: 9.1959 |
|
save_policy_outputs: 27.7543 |
|
split_output_tensors: 10.4284 |
|
[2025-08-14 02:54:56,187][04513] Loop Runner_EvtLoop terminating... |
|
[2025-08-14 02:54:56,188][04513] Runner profile tree view: |
|
main_loop: 1186.3498 |
|
[2025-08-14 02:54:56,189][04513] Collected {0: 4005888}, FPS: 3376.6 |
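
The final summary is internally consistent: the reported FPS is simply the total collected frame count divided by the main_loop wall time from the Runner profile a few lines above.

```python
# Quick arithmetic check of the summary line above.
total_frames = 4_005_888          # "Collected {0: 4005888}"
main_loop_seconds = 1186.3498     # "main_loop: 1186.3498"
print(f"{total_frames / main_loop_seconds:.1f} FPS")  # approximately 3376.6
```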
|
[2025-08-14 03:04:38,606][04513] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-14 03:04:38,607][04513] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-14 03:04:38,609][04513] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-14 03:04:38,610][04513] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-14 03:04:38,612][04513] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-14 03:04:38,613][04513] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-14 03:04:38,614][04513] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-14 03:04:38,615][04513] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-14 03:04:38,616][04513] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-14 03:04:38,617][04513] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-14 03:04:38,618][04513] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-14 03:04:38,620][04513] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-14 03:04:38,621][04513] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-14 03:04:38,622][04513] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-14 03:04:38,623][04513] Using frameskip 1 and render_action_repeat=4 for evaluation |
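
The evaluation run above reloads the saved config.json and then layers the command-line values on top of it, logging each override and flagging any argument that was missing from the saved file. A minimal sketch of that merge logic, as an illustration of the behaviour the log describes rather than sample_factory's actual code:

```python
import json

def merge_config(saved_path: str, cli_args: dict) -> dict:
    """Load a saved config and apply command-line overrides on top of it."""
    with open(saved_path) as f:
        cfg = json.load(f)
    for key, value in cli_args.items():
        if key in cfg:
            print(f"Overriding arg {key!r} with value {value!r} passed from command line")
        else:
            print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg

# e.g. a few of the overrides visible in the log above:
# merge_config("/content/train_dir/default_experiment/config.json",
#              {"num_workers": 1, "no_render": True, "save_video": True})
```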
|
[2025-08-14 03:04:38,667][04513] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:04:38,670][04513] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-14 03:04:38,672][04513] RunningMeanStd input shape: (1,) |
|
[2025-08-14 03:04:38,689][04513] ConvEncoder: input_channels=3 |
|
[2025-08-14 03:04:38,796][04513] Conv encoder output size: 512 |
|
[2025-08-14 03:04:38,797][04513] Policy head output size: 512 |
|
[2025-08-14 03:04:38,986][04513] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:04:38,989][04513] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-14 03:04:38,991][04513] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:04:38,994][04513] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-14 03:04:38,995][04513] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:04:38,997][04513] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
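
All three load attempts fail for the same reason: PyTorch 2.6 changed the default of torch.load's `weights_only` argument to True, and the safe unpickler rejects the `numpy.core.multiarray.scalar` global stored in this checkpoint. The error message itself names two fixes; here is a short sketch of both, applicable only because this checkpoint was produced by the training run above and is therefore trusted (the path is taken from the log):

```python
import numpy as np
import torch

ckpt = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth"

# Option 1: allowlist the rejected global and keep weights_only=True.
with torch.serialization.safe_globals([np.core.multiarray.scalar]):
    checkpoint_dict = torch.load(ckpt, map_location="cpu")

# Option 2: opt out of the safe unpickler entirely. This allows arbitrary
# code execution during unpickling, so it is only acceptable for trusted files.
checkpoint_dict = torch.load(ckpt, map_location="cpu", weights_only=False)
```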
|
[2025-08-14 03:22:49,952][16404] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-08-14 03:22:49,955][16404] Rollout worker 0 uses device cpu |
|
[2025-08-14 03:22:49,956][16404] Rollout worker 1 uses device cpu |
|
[2025-08-14 03:22:49,958][16404] Rollout worker 2 uses device cpu |
|
[2025-08-14 03:22:49,959][16404] Rollout worker 3 uses device cpu |
|
[2025-08-14 03:22:49,960][16404] Rollout worker 4 uses device cpu |
|
[2025-08-14 03:22:49,961][16404] Rollout worker 5 uses device cpu |
|
[2025-08-14 03:22:49,962][16404] Rollout worker 6 uses device cpu |
|
[2025-08-14 03:22:49,963][16404] Rollout worker 7 uses device cpu |
|
[2025-08-14 03:22:50,066][16404] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-14 03:22:50,067][16404] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-08-14 03:22:50,097][16404] Starting all processes... |
|
[2025-08-14 03:22:50,098][16404] Starting process learner_proc0 |
|
[2025-08-14 03:22:50,151][16404] Starting all processes... |
|
[2025-08-14 03:22:50,157][16404] Starting process inference_proc0-0 |
|
[2025-08-14 03:22:50,157][16404] Starting process rollout_proc0 |
|
[2025-08-14 03:22:50,158][16404] Starting process rollout_proc1 |
|
[2025-08-14 03:22:50,158][16404] Starting process rollout_proc2 |
|
[2025-08-14 03:22:50,158][16404] Starting process rollout_proc3 |
|
[2025-08-14 03:22:50,158][16404] Starting process rollout_proc4 |
|
[2025-08-14 03:22:50,158][16404] Starting process rollout_proc5 |
|
[2025-08-14 03:22:50,158][16404] Starting process rollout_proc6 |
|
[2025-08-14 03:22:50,158][16404] Starting process rollout_proc7 |
|
[2025-08-14 03:23:07,843][18682] Worker 4 uses CPU cores [0] |
|
[2025-08-14 03:23:08,011][18676] Worker 0 uses CPU cores [0] |
|
[2025-08-14 03:23:08,041][18681] Worker 7 uses CPU cores [1] |
|
[2025-08-14 03:23:08,158][18679] Worker 3 uses CPU cores [1] |
|
[2025-08-14 03:23:08,194][18678] Worker 2 uses CPU cores [0] |
|
[2025-08-14 03:23:08,213][18683] Worker 6 uses CPU cores [0] |
|
[2025-08-14 03:23:08,215][18680] Worker 5 uses CPU cores [1] |
|
[2025-08-14 03:23:08,230][18677] Worker 1 uses CPU cores [1] |
|
[2025-08-14 03:23:08,280][18662] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-14 03:23:08,280][18662] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-08-14 03:23:08,302][18662] Num visible devices: 1 |
|
[2025-08-14 03:23:08,304][18662] Starting seed is not provided |
|
[2025-08-14 03:23:08,304][18662] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-14 03:23:08,305][18662] Initializing actor-critic model on device cuda:0 |
|
[2025-08-14 03:23:08,306][18662] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-14 03:23:08,308][18662] RunningMeanStd input shape: (1,) |
|
[2025-08-14 03:23:08,311][18675] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-14 03:23:08,311][18675] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-08-14 03:23:08,329][18662] ConvEncoder: input_channels=3 |
|
[2025-08-14 03:23:08,332][18675] Num visible devices: 1 |
|
[2025-08-14 03:23:08,437][18662] Conv encoder output size: 512 |
|
[2025-08-14 03:23:08,438][18662] Policy head output size: 512 |
|
[2025-08-14 03:23:08,453][18662] Created Actor Critic model with architecture: |
|
[2025-08-14 03:23:08,453][18662] ActorCriticSharedWeights( |
|
(obs_normalizer): ObservationNormalizer( |
|
(running_mean_std): RunningMeanStdDictInPlace( |
|
(running_mean_std): ModuleDict( |
|
(obs): RunningMeanStdInPlace() |
|
) |
|
) |
|
) |
|
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|
(encoder): VizdoomEncoder( |
|
(basic_encoder): ConvEncoder( |
|
(enc): RecursiveScriptModule( |
|
original_name=ConvEncoderImpl |
|
(conv_head): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Conv2d) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
(2): RecursiveScriptModule(original_name=Conv2d) |
|
(3): RecursiveScriptModule(original_name=ELU) |
|
(4): RecursiveScriptModule(original_name=Conv2d) |
|
(5): RecursiveScriptModule(original_name=ELU) |
|
) |
|
(mlp_layers): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Linear) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
) |
|
) |
|
) |
|
) |
|
(core): ModelCoreRNN( |
|
(core): GRU(512, 512) |
|
) |
|
(decoder): MlpDecoder( |
|
(mlp): Identity() |
|
) |
|
(critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|
(action_parameterization): ActionParameterizationDefault( |
|
(distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|
) |
|
) |
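
The printed RecursiveScriptModules hide the convolution parameters, but the overall structure is fully visible: a three-layer Conv/ELU head, a Linear+ELU projection to 512, a GRU(512, 512) core, a 1-dim critic head, and a 5-dim action-logits head. A hedged PyTorch reconstruction follows; the layer types and the 512/1/5 sizes come from the log, while the conv channel, kernel, and stride values are assumed defaults, and the input is the (3, 72, 128) observation shape reported earlier.

```python
import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions: int = 5):
        super().__init__()
        # Conv parameters below are assumptions; only the structure is confirmed.
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer flattened size from the (3, 72, 128) obs
            n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).numel()
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, 512), nn.ELU())
        self.core = nn.GRU(512, 512)                  # ModelCoreRNN
        self.critic_linear = nn.Linear(512, 1)        # value head
        self.distribution_linear = nn.Linear(512, num_actions)  # action logits

    def forward(self, obs, rnn_state=None):
        x = self.conv_head(obs).flatten(1)
        x = self.mlp_layers(x)                        # "Conv encoder output size: 512"
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state
```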
|
[2025-08-14 03:23:08,615][18662] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-08-14 03:23:09,621][18662] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:23:09,623][18662] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-14 03:23:09,625][18662] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:23:09,626][18662] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-14 03:23:09,627][18662] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:23:09,628][18662] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-14 03:23:09,629][18662] Did not load from checkpoint, starting from scratch! |
|
[2025-08-14 03:23:09,630][18662] Initialized policy 0 weights for model version 0 |
|
[2025-08-14 03:23:09,638][18662] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-14 03:23:09,647][18662] LearnerWorker_p0 finished initialization! |
|
[2025-08-14 03:23:09,742][18675] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-14 03:23:09,743][18675] RunningMeanStd input shape: (1,) |
|
[2025-08-14 03:23:09,755][18675] ConvEncoder: input_channels=3 |
|
[2025-08-14 03:23:09,859][18675] Conv encoder output size: 512 |
|
[2025-08-14 03:23:09,859][18675] Policy head output size: 512 |
|
[2025-08-14 03:23:09,893][16404] Inference worker 0-0 is ready! |
|
[2025-08-14 03:23:09,894][16404] All inference workers are ready! Signal rollout workers to start! |
|
[2025-08-14 03:23:10,059][16404] Heartbeat connected on Batcher_0 |
|
[2025-08-14 03:23:10,063][16404] Heartbeat connected on LearnerWorker_p0 |
|
[2025-08-14 03:23:10,104][16404] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-08-14 03:23:10,142][18677] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:23:10,163][18682] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:23:10,169][18681] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:23:10,172][18676] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:23:10,181][18679] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:23:10,192][18680] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:23:10,191][18678] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:23:10,209][18683] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:23:11,593][18676] Decorrelating experience for 0 frames... |
|
[2025-08-14 03:23:11,602][18682] Decorrelating experience for 0 frames... |
|
[2025-08-14 03:23:12,174][18677] Decorrelating experience for 0 frames... |
|
[2025-08-14 03:23:12,204][18681] Decorrelating experience for 0 frames... |
|
[2025-08-14 03:23:12,216][18680] Decorrelating experience for 0 frames... |
|
[2025-08-14 03:23:12,215][18679] Decorrelating experience for 0 frames... |
|
[2025-08-14 03:23:13,772][18676] Decorrelating experience for 32 frames... |
|
[2025-08-14 03:23:13,790][18682] Decorrelating experience for 32 frames... |
|
[2025-08-14 03:23:14,217][18677] Decorrelating experience for 32 frames... |
|
[2025-08-14 03:23:14,246][18680] Decorrelating experience for 32 frames... |
|
[2025-08-14 03:23:14,251][18681] Decorrelating experience for 32 frames... |
|
[2025-08-14 03:23:14,282][18679] Decorrelating experience for 32 frames... |
|
[2025-08-14 03:23:14,625][16404] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-08-14 03:23:14,915][18683] Decorrelating experience for 0 frames... |
|
[2025-08-14 03:23:16,238][18681] Decorrelating experience for 64 frames... |
|
[2025-08-14 03:23:16,558][18678] Decorrelating experience for 0 frames... |
|
[2025-08-14 03:23:16,720][18682] Decorrelating experience for 64 frames... |
|
[2025-08-14 03:23:16,854][18683] Decorrelating experience for 32 frames... |
|
[2025-08-14 03:23:16,856][18676] Decorrelating experience for 64 frames... |
|
[2025-08-14 03:23:17,071][18681] Decorrelating experience for 96 frames... |
|
[2025-08-14 03:23:17,265][16404] Heartbeat connected on RolloutWorker_w7 |
|
[2025-08-14 03:23:17,978][18679] Decorrelating experience for 64 frames... |
|
[2025-08-14 03:23:18,562][18678] Decorrelating experience for 32 frames... |
|
[2025-08-14 03:23:18,723][18682] Decorrelating experience for 96 frames... |
|
[2025-08-14 03:23:18,882][18676] Decorrelating experience for 96 frames... |
|
[2025-08-14 03:23:19,012][16404] Heartbeat connected on RolloutWorker_w4 |
|
[2025-08-14 03:23:19,186][16404] Heartbeat connected on RolloutWorker_w0 |
|
[2025-08-14 03:23:19,233][18683] Decorrelating experience for 64 frames... |
|
[2025-08-14 03:23:19,624][16404] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 53.2. Samples: 266. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-08-14 03:23:19,625][16404] Avg episode reward: [(0, '1.493')] |
|
[2025-08-14 03:23:20,094][18679] Decorrelating experience for 96 frames... |
|
[2025-08-14 03:23:20,315][16404] Heartbeat connected on RolloutWorker_w3 |
|
[2025-08-14 03:23:20,701][18680] Decorrelating experience for 64 frames... |
|
[2025-08-14 03:23:21,373][18678] Decorrelating experience for 64 frames... |
|
[2025-08-14 03:23:23,106][18662] Signal inference workers to stop experience collection... |
|
[2025-08-14 03:23:23,117][18675] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-08-14 03:23:23,329][18678] Decorrelating experience for 96 frames... |
|
[2025-08-14 03:23:23,653][18677] Decorrelating experience for 64 frames... |
|
[2025-08-14 03:23:23,730][18680] Decorrelating experience for 96 frames... |
|
[2025-08-14 03:23:23,873][16404] Heartbeat connected on RolloutWorker_w2 |
|
[2025-08-14 03:23:23,939][18662] Signal inference workers to resume experience collection... |
|
[2025-08-14 03:23:23,940][18675] InferenceWorker_p0-w0: resuming experience collection |
|
[2025-08-14 03:23:24,067][16404] Heartbeat connected on RolloutWorker_w5 |
|
[2025-08-14 03:23:24,624][16404] Fps is (10 sec: 819.3, 60 sec: 819.3, 300 sec: 819.3). Total num frames: 8192. Throughput: 0: 176.4. Samples: 1764. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
|
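The windowed FPS figures in these status lines are effectively the frames accumulated over the window divided by its length. A rough sanity check under that assumption (Sample Factory's exact bookkeeping may differ slightly):

```python
# Rough reconstruction of the "10 sec" FPS reading in the line above.
frames_in_window = 8192  # Total num frames at 03:23:24, first nonzero report
window_seconds = 10.0
print(frames_in_window / window_seconds)  # ~819.2, matching "10 sec: 819.3"
```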
[2025-08-14 03:23:24,628][16404] Avg episode reward: [(0, '3.267')] |
|
[2025-08-14 03:23:26,615][18677] Decorrelating experience for 96 frames... |
|
[2025-08-14 03:23:27,299][16404] Heartbeat connected on RolloutWorker_w1 |
|
[2025-08-14 03:23:27,339][18683] Decorrelating experience for 96 frames... |
|
[2025-08-14 03:23:27,541][16404] Heartbeat connected on RolloutWorker_w6 |
|
[2025-08-14 03:23:29,624][16404] Fps is (10 sec: 2048.0, 60 sec: 1365.5, 300 sec: 1365.5). Total num frames: 20480. Throughput: 0: 327.1. Samples: 4906. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 03:23:29,627][16404] Avg episode reward: [(0, '3.555')] |
|
[2025-08-14 03:23:34,624][16404] Fps is (10 sec: 2867.2, 60 sec: 1843.3, 300 sec: 1843.3). Total num frames: 36864. Throughput: 0: 470.6. Samples: 9412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:23:34,625][16404] Avg episode reward: [(0, '3.839')] |
|
[2025-08-14 03:23:35,349][18675] Updated weights for policy 0, policy_version 10 (0.0024) |
|
[2025-08-14 03:23:39,624][16404] Fps is (10 sec: 3686.5, 60 sec: 2293.9, 300 sec: 2293.9). Total num frames: 57344. Throughput: 0: 499.3. Samples: 12482. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:23:39,625][16404] Avg episode reward: [(0, '4.339')] |
|
[2025-08-14 03:23:44,624][16404] Fps is (10 sec: 3686.4, 60 sec: 2457.7, 300 sec: 2457.7). Total num frames: 73728. Throughput: 0: 601.2. Samples: 18034. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:23:44,627][16404] Avg episode reward: [(0, '4.305')] |
|
[2025-08-14 03:23:47,896][18675] Updated weights for policy 0, policy_version 20 (0.0019) |
|
[2025-08-14 03:23:49,624][16404] Fps is (10 sec: 2867.2, 60 sec: 2457.7, 300 sec: 2457.7). Total num frames: 86016. Throughput: 0: 627.7. Samples: 21970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:23:49,625][16404] Avg episode reward: [(0, '4.320')] |
|
[2025-08-14 03:23:54,624][16404] Fps is (10 sec: 3276.8, 60 sec: 2662.5, 300 sec: 2662.5). Total num frames: 106496. Throughput: 0: 627.0. Samples: 25078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:23:54,630][16404] Avg episode reward: [(0, '4.527')] |
|
[2025-08-14 03:23:54,641][18662] Saving new best policy, reward=4.527! |
|
[2025-08-14 03:23:58,318][18675] Updated weights for policy 0, policy_version 30 (0.0013) |
|
[2025-08-14 03:23:59,625][16404] Fps is (10 sec: 3685.9, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 122880. Throughput: 0: 686.2. Samples: 30880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:23:59,626][16404] Avg episode reward: [(0, '4.783')] |
|
[2025-08-14 03:23:59,629][18662] Saving new best policy, reward=4.783! |
|
[2025-08-14 03:24:04,624][16404] Fps is (10 sec: 3276.8, 60 sec: 2785.4, 300 sec: 2785.4). Total num frames: 139264. Throughput: 0: 776.0. Samples: 35188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:24:04,628][16404] Avg episode reward: [(0, '4.640')] |
|
[2025-08-14 03:24:09,624][16404] Fps is (10 sec: 3686.9, 60 sec: 2904.5, 300 sec: 2904.5). Total num frames: 159744. Throughput: 0: 809.4. Samples: 38188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:24:09,625][16404] Avg episode reward: [(0, '4.514')] |
|
[2025-08-14 03:24:10,169][18675] Updated weights for policy 0, policy_version 40 (0.0014) |
|
[2025-08-14 03:24:14,624][16404] Fps is (10 sec: 3686.4, 60 sec: 2935.5, 300 sec: 2935.5). Total num frames: 176128. Throughput: 0: 870.1. Samples: 44060. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:24:14,627][16404] Avg episode reward: [(0, '4.496')] |
|
[2025-08-14 03:24:19,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 2961.8). Total num frames: 192512. Throughput: 0: 862.8. Samples: 48236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:24:19,625][16404] Avg episode reward: [(0, '4.423')] |
|
[2025-08-14 03:24:22,234][18675] Updated weights for policy 0, policy_version 50 (0.0017) |
|
[2025-08-14 03:24:24,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3042.8). Total num frames: 212992. Throughput: 0: 863.7. Samples: 51348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:24:24,625][16404] Avg episode reward: [(0, '4.412')] |
|
[2025-08-14 03:24:29,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3058.4). Total num frames: 229376. Throughput: 0: 875.3. Samples: 57424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:24:29,626][16404] Avg episode reward: [(0, '4.560')] |
|
[2025-08-14 03:24:34,368][18675] Updated weights for policy 0, policy_version 60 (0.0026) |
|
[2025-08-14 03:24:34,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3072.1). Total num frames: 245760. Throughput: 0: 880.8. Samples: 61604. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:24:34,628][16404] Avg episode reward: [(0, '4.572')] |
|
[2025-08-14 03:24:39,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3132.3). Total num frames: 266240. Throughput: 0: 881.2. Samples: 64734. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:24:39,628][16404] Avg episode reward: [(0, '4.498')] |
|
[2025-08-14 03:24:44,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3140.3). Total num frames: 282624. Throughput: 0: 886.8. Samples: 70786. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:24:44,625][16404] Avg episode reward: [(0, '4.483')] |
|
[2025-08-14 03:24:44,635][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth... |
|
[2025-08-14 03:24:44,839][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth |
|
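The save-then-remove pair above reflects checkpoint rotation: the learner writes the new checkpoint and prunes older ones so only the most recent few remain on disk. A hypothetical sketch of that keep-latest-N pattern (function and parameter names are illustrative, not Sample Factory's actual API):

```python
# Hypothetical keep-latest-N checkpoint rotation, mirroring the log above.
import os
from glob import glob

def rotate_checkpoints(ckpt_dir: str, keep_n: int = 2) -> None:
    # Names embed policy version and frame count (e.g. checkpoint_000000069_282624.pth),
    # so lexicographic order is also chronological.
    ckpts = sorted(glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    for stale in ckpts[:-keep_n]:
        os.remove(stale)
```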
[2025-08-14 03:24:45,225][18675] Updated weights for policy 0, policy_version 70 (0.0032) |
|
[2025-08-14 03:24:49,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3147.5). Total num frames: 299008. Throughput: 0: 878.2. Samples: 74706. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:24:49,627][16404] Avg episode reward: [(0, '4.625')] |
|
[2025-08-14 03:24:54,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3194.9). Total num frames: 319488. Throughput: 0: 880.3. Samples: 77802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:24:54,625][16404] Avg episode reward: [(0, '4.324')] |
|
[2025-08-14 03:24:56,425][18675] Updated weights for policy 0, policy_version 80 (0.0014) |
|
[2025-08-14 03:24:59,630][16404] Fps is (10 sec: 3684.2, 60 sec: 3549.6, 300 sec: 3198.6). Total num frames: 335872. Throughput: 0: 884.9. Samples: 83884. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:24:59,631][16404] Avg episode reward: [(0, '4.480')] |
|
[2025-08-14 03:25:04,624][16404] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3165.1). Total num frames: 348160. Throughput: 0: 878.4. Samples: 87764. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:25:04,627][16404] Avg episode reward: [(0, '4.445')] |
|
[2025-08-14 03:25:08,883][18675] Updated weights for policy 0, policy_version 90 (0.0023) |
|
[2025-08-14 03:25:09,624][16404] Fps is (10 sec: 3278.8, 60 sec: 3481.6, 300 sec: 3205.6). Total num frames: 368640. Throughput: 0: 876.4. Samples: 90786. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:25:09,627][16404] Avg episode reward: [(0, '4.524')] |
|
[2025-08-14 03:25:14,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3242.7). Total num frames: 389120. Throughput: 0: 876.6. Samples: 96870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:25:14,625][16404] Avg episode reward: [(0, '4.382')] |
|
[2025-08-14 03:25:19,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3211.3). Total num frames: 401408. Throughput: 0: 870.8. Samples: 100792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:25:19,628][16404] Avg episode reward: [(0, '4.394')] |
|
[2025-08-14 03:25:21,233][18675] Updated weights for policy 0, policy_version 100 (0.0026) |
|
[2025-08-14 03:25:24,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3245.3). Total num frames: 421888. Throughput: 0: 865.5. Samples: 103682. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:25:24,628][16404] Avg episode reward: [(0, '4.562')] |
|
[2025-08-14 03:25:29,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3246.5). Total num frames: 438272. Throughput: 0: 865.1. Samples: 109714. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:25:29,626][16404] Avg episode reward: [(0, '4.765')] |
|
[2025-08-14 03:25:33,345][18675] Updated weights for policy 0, policy_version 110 (0.0029) |
|
[2025-08-14 03:25:34,624][16404] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3218.3). Total num frames: 450560. Throughput: 0: 864.2. Samples: 113596. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:25:34,626][16404] Avg episode reward: [(0, '4.776')] |
|
[2025-08-14 03:25:39,624][16404] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 475136. Throughput: 0: 863.4. Samples: 116654. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:25:39,630][16404] Avg episode reward: [(0, '4.610')] |
|
[2025-08-14 03:25:43,394][18675] Updated weights for policy 0, policy_version 120 (0.0034) |
|
[2025-08-14 03:25:44,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3304.1). Total num frames: 495616. Throughput: 0: 865.9. Samples: 122844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:25:44,633][16404] Avg episode reward: [(0, '4.618')] |
|
[2025-08-14 03:25:49,624][16404] Fps is (10 sec: 3276.9, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 507904. Throughput: 0: 878.8. Samples: 127310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:25:49,633][16404] Avg episode reward: [(0, '4.647')] |
|
[2025-08-14 03:25:54,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3302.4). Total num frames: 528384. Throughput: 0: 881.5. Samples: 130454. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:25:54,625][16404] Avg episode reward: [(0, '4.608')] |
|
[2025-08-14 03:25:54,842][18675] Updated weights for policy 0, policy_version 130 (0.0024) |
|
[2025-08-14 03:25:59,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3550.2, 300 sec: 3326.5). Total num frames: 548864. Throughput: 0: 896.1. Samples: 137196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:25:59,625][16404] Avg episode reward: [(0, '4.612')] |
|
[2025-08-14 03:26:04,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3325.0). Total num frames: 565248. Throughput: 0: 911.5. Samples: 141810. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:26:04,626][16404] Avg episode reward: [(0, '4.621')] |
|
[2025-08-14 03:26:05,910][18675] Updated weights for policy 0, policy_version 140 (0.0031) |
|
[2025-08-14 03:26:09,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3370.5). Total num frames: 589824. Throughput: 0: 919.1. Samples: 145040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:26:09,626][16404] Avg episode reward: [(0, '4.535')] |
|
[2025-08-14 03:26:14,627][16404] Fps is (10 sec: 4504.3, 60 sec: 3686.2, 300 sec: 3390.5). Total num frames: 610304. Throughput: 0: 933.9. Samples: 151744. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:26:14,629][16404] Avg episode reward: [(0, '4.919')] |
|
[2025-08-14 03:26:14,645][18662] Saving new best policy, reward=4.919! |
|
[2025-08-14 03:26:15,669][18675] Updated weights for policy 0, policy_version 150 (0.0014) |
|
[2025-08-14 03:26:19,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3365.4). Total num frames: 622592. Throughput: 0: 950.4. Samples: 156362. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:26:19,629][16404] Avg episode reward: [(0, '5.024')] |
|
[2025-08-14 03:26:19,634][18662] Saving new best policy, reward=5.024! |
|
[2025-08-14 03:26:24,624][16404] Fps is (10 sec: 3687.6, 60 sec: 3754.7, 300 sec: 3406.2). Total num frames: 647168. Throughput: 0: 953.7. Samples: 159568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:26:24,625][16404] Avg episode reward: [(0, '4.621')] |
|
[2025-08-14 03:26:26,090][18675] Updated weights for policy 0, policy_version 160 (0.0035) |
|
[2025-08-14 03:26:29,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3423.9). Total num frames: 667648. Throughput: 0: 968.1. Samples: 166410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:26:29,629][16404] Avg episode reward: [(0, '4.415')] |
|
[2025-08-14 03:26:34,624][16404] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3420.2). Total num frames: 684032. Throughput: 0: 973.0. Samples: 171094. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:26:34,629][16404] Avg episode reward: [(0, '4.508')] |
|
[2025-08-14 03:26:36,935][18675] Updated weights for policy 0, policy_version 170 (0.0015) |
|
[2025-08-14 03:26:39,624][16404] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3436.7). Total num frames: 704512. Throughput: 0: 977.4. Samples: 174438. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:26:39,629][16404] Avg episode reward: [(0, '4.731')] |
|
[2025-08-14 03:26:44,624][16404] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3471.9). Total num frames: 729088. Throughput: 0: 977.9. Samples: 181200. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 03:26:44,626][16404] Avg episode reward: [(0, '5.226')] |
|
[2025-08-14 03:26:44,632][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000178_729088.pth... |
|
[2025-08-14 03:26:44,815][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000178_729088.pth |
|
[2025-08-14 03:26:44,846][18662] Saving new best policy, reward=5.226! |
|
[2025-08-14 03:26:47,782][18675] Updated weights for policy 0, policy_version 180 (0.0013) |
|
[2025-08-14 03:26:49,624][16404] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3448.3). Total num frames: 741376. Throughput: 0: 975.3. Samples: 185700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:26:49,629][16404] Avg episode reward: [(0, '5.523')] |
|
[2025-08-14 03:26:49,633][18662] Saving new best policy, reward=5.523! |
|
[2025-08-14 03:26:54,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3463.0). Total num frames: 761856. Throughput: 0: 975.3. Samples: 188930. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:26:54,626][16404] Avg episode reward: [(0, '5.553')] |
|
[2025-08-14 03:26:54,648][18662] Saving new best policy, reward=5.553! |
|
[2025-08-14 03:26:57,424][18675] Updated weights for policy 0, policy_version 190 (0.0014) |
|
[2025-08-14 03:26:59,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3477.1). Total num frames: 782336. Throughput: 0: 975.8. Samples: 195654. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:26:59,629][16404] Avg episode reward: [(0, '5.546')] |
|
[2025-08-14 03:27:04,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3472.7). Total num frames: 798720. Throughput: 0: 973.0. Samples: 200146. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:27:04,625][16404] Avg episode reward: [(0, '5.327')] |
|
[2025-08-14 03:27:08,521][18675] Updated weights for policy 0, policy_version 200 (0.0020) |
|
[2025-08-14 03:27:09,624][16404] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3503.4). Total num frames: 823296. Throughput: 0: 976.9. Samples: 203530. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:27:09,629][16404] Avg episode reward: [(0, '5.215')] |
|
[2025-08-14 03:27:14,626][16404] Fps is (10 sec: 4504.7, 60 sec: 3891.3, 300 sec: 3515.7). Total num frames: 843776. Throughput: 0: 976.8. Samples: 210366. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:27:14,627][16404] Avg episode reward: [(0, '5.600')] |
|
[2025-08-14 03:27:14,637][18662] Saving new best policy, reward=5.600! |
|
[2025-08-14 03:27:19,529][18675] Updated weights for policy 0, policy_version 210 (0.0016) |
|
[2025-08-14 03:27:19,624][16404] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3510.9). Total num frames: 860160. Throughput: 0: 970.8. Samples: 214780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:27:19,629][16404] Avg episode reward: [(0, '5.546')] |
|
[2025-08-14 03:27:24,624][16404] Fps is (10 sec: 3687.2, 60 sec: 3891.2, 300 sec: 3522.6). Total num frames: 880640. Throughput: 0: 968.0. Samples: 218000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:27:24,625][16404] Avg episode reward: [(0, '5.030')] |
|
[2025-08-14 03:27:28,860][18675] Updated weights for policy 0, policy_version 220 (0.0025) |
|
[2025-08-14 03:27:29,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3533.8). Total num frames: 901120. Throughput: 0: 966.4. Samples: 224688. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:27:29,625][16404] Avg episode reward: [(0, '5.083')] |
|
[2025-08-14 03:27:34,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3528.9). Total num frames: 917504. Throughput: 0: 968.3. Samples: 229272. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:27:34,627][16404] Avg episode reward: [(0, '5.045')] |
|
[2025-08-14 03:27:39,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3539.6). Total num frames: 937984. Throughput: 0: 971.3. Samples: 232638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:27:39,629][16404] Avg episode reward: [(0, '5.140')] |
|
[2025-08-14 03:27:40,019][18675] Updated weights for policy 0, policy_version 230 (0.0034) |
|
[2025-08-14 03:27:44,625][16404] Fps is (10 sec: 4095.6, 60 sec: 3822.9, 300 sec: 3549.9). Total num frames: 958464. Throughput: 0: 969.8. Samples: 239298. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:27:44,626][16404] Avg episode reward: [(0, '5.473')] |
|
[2025-08-14 03:27:49,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3544.9). Total num frames: 974848. Throughput: 0: 970.8. Samples: 243834. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:27:49,625][16404] Avg episode reward: [(0, '5.375')] |
|
[2025-08-14 03:27:51,181][18675] Updated weights for policy 0, policy_version 240 (0.0016) |
|
[2025-08-14 03:27:54,624][16404] Fps is (10 sec: 3686.7, 60 sec: 3891.2, 300 sec: 3554.8). Total num frames: 995328. Throughput: 0: 968.0. Samples: 247090. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:27:54,627][16404] Avg episode reward: [(0, '5.741')] |
|
[2025-08-14 03:27:54,635][18662] Saving new best policy, reward=5.741! |
|
[2025-08-14 03:27:59,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3564.3). Total num frames: 1015808. Throughput: 0: 964.3. Samples: 253756. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:27:59,628][16404] Avg episode reward: [(0, '5.994')] |
|
[2025-08-14 03:27:59,631][18662] Saving new best policy, reward=5.994! |
|
[2025-08-14 03:28:01,403][18675] Updated weights for policy 0, policy_version 250 (0.0013) |
|
[2025-08-14 03:28:04,624][16404] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3559.3). Total num frames: 1032192. Throughput: 0: 966.7. Samples: 258284. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:28:04,626][16404] Avg episode reward: [(0, '6.146')] |
|
[2025-08-14 03:28:04,634][18662] Saving new best policy, reward=6.146! |
|
[2025-08-14 03:28:09,626][16404] Fps is (10 sec: 3685.6, 60 sec: 3822.8, 300 sec: 3568.4). Total num frames: 1052672. Throughput: 0: 968.4. Samples: 261580. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:28:09,627][16404] Avg episode reward: [(0, '6.265')] |
|
[2025-08-14 03:28:09,632][18662] Saving new best policy, reward=6.265! |
|
[2025-08-14 03:28:11,739][18675] Updated weights for policy 0, policy_version 260 (0.0022) |
|
[2025-08-14 03:28:14,627][16404] Fps is (10 sec: 4094.8, 60 sec: 3822.9, 300 sec: 3637.8). Total num frames: 1073152. Throughput: 0: 965.1. Samples: 268122. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:28:14,629][16404] Avg episode reward: [(0, '6.293')] |
|
[2025-08-14 03:28:14,648][18662] Saving new best policy, reward=6.293! |
|
[2025-08-14 03:28:19,624][16404] Fps is (10 sec: 3687.1, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 1089536. Throughput: 0: 964.4. Samples: 272668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:28:19,625][16404] Avg episode reward: [(0, '6.677')] |
|
[2025-08-14 03:28:19,629][18662] Saving new best policy, reward=6.677! |
|
[2025-08-14 03:28:23,036][18675] Updated weights for policy 0, policy_version 270 (0.0014) |
|
[2025-08-14 03:28:24,624][16404] Fps is (10 sec: 3687.5, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 1110016. Throughput: 0: 960.7. Samples: 275868. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:28:24,630][16404] Avg episode reward: [(0, '6.649')] |
|
[2025-08-14 03:28:29,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1130496. Throughput: 0: 960.9. Samples: 282536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:28:29,629][16404] Avg episode reward: [(0, '6.954')] |
|
[2025-08-14 03:28:29,690][18662] Saving new best policy, reward=6.954! |
|
[2025-08-14 03:28:34,085][18675] Updated weights for policy 0, policy_version 280 (0.0016) |
|
[2025-08-14 03:28:34,625][16404] Fps is (10 sec: 3686.0, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 1146880. Throughput: 0: 960.9. Samples: 287074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:28:34,628][16404] Avg episode reward: [(0, '6.925')] |
|
[2025-08-14 03:28:39,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3721.1). Total num frames: 1171456. Throughput: 0: 962.2. Samples: 290388. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:28:39,628][16404] Avg episode reward: [(0, '7.064')] |
|
[2025-08-14 03:28:39,631][18662] Saving new best policy, reward=7.064! |
|
[2025-08-14 03:28:43,242][18675] Updated weights for policy 0, policy_version 290 (0.0017) |
|
[2025-08-14 03:28:44,631][16404] Fps is (10 sec: 4502.9, 60 sec: 3890.8, 300 sec: 3748.8). Total num frames: 1191936. Throughput: 0: 963.4. Samples: 297116. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:28:44,632][16404] Avg episode reward: [(0, '7.441')] |
|
[2025-08-14 03:28:44,645][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000291_1191936.pth... |
|
[2025-08-14 03:28:44,777][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000291_1191936.pth |
|
[2025-08-14 03:28:44,801][18662] Saving new best policy, reward=7.441! |
|
[2025-08-14 03:28:49,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1204224. Throughput: 0: 961.1. Samples: 301534. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:28:49,627][16404] Avg episode reward: [(0, '6.985')] |
|
[2025-08-14 03:28:54,596][18675] Updated weights for policy 0, policy_version 300 (0.0013) |
|
[2025-08-14 03:28:54,624][16404] Fps is (10 sec: 3689.0, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 1228800. Throughput: 0: 962.0. Samples: 304870. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:28:54,625][16404] Avg episode reward: [(0, '6.914')] |
|
[2025-08-14 03:28:59,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1245184. Throughput: 0: 958.5. Samples: 311252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:28:59,627][16404] Avg episode reward: [(0, '6.679')] |
|
[2025-08-14 03:29:04,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3735.0). Total num frames: 1261568. Throughput: 0: 950.4. Samples: 315438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:29:04,625][16404] Avg episode reward: [(0, '7.108')] |
|
[2025-08-14 03:29:06,297][18675] Updated weights for policy 0, policy_version 310 (0.0024) |
|
[2025-08-14 03:29:09,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3748.9). Total num frames: 1282048. Throughput: 0: 949.5. Samples: 318594. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:29:09,629][16404] Avg episode reward: [(0, '7.338')] |
|
[2025-08-14 03:29:14,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3823.1, 300 sec: 3762.8). Total num frames: 1302528. Throughput: 0: 949.0. Samples: 325240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:29:14,628][16404] Avg episode reward: [(0, '7.779')] |
|
[2025-08-14 03:29:14,635][18662] Saving new best policy, reward=7.779! |
|
[2025-08-14 03:29:16,620][18675] Updated weights for policy 0, policy_version 320 (0.0016) |
|
[2025-08-14 03:29:19,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1318912. Throughput: 0: 952.1. Samples: 329918. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:29:19,628][16404] Avg episode reward: [(0, '8.063')] |
|
[2025-08-14 03:29:19,634][18662] Saving new best policy, reward=8.063! |
|
[2025-08-14 03:29:24,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1339392. Throughput: 0: 952.8. Samples: 333262. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:29:24,625][16404] Avg episode reward: [(0, '7.929')] |
|
[2025-08-14 03:29:26,564][18675] Updated weights for policy 0, policy_version 330 (0.0017) |
|
[2025-08-14 03:29:29,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 1359872. Throughput: 0: 950.7. Samples: 339890. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:29:29,628][16404] Avg episode reward: [(0, '7.581')] |
|
[2025-08-14 03:29:34,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3762.8). Total num frames: 1376256. Throughput: 0: 953.9. Samples: 344460. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:29:34,627][16404] Avg episode reward: [(0, '8.421')] |
|
[2025-08-14 03:29:34,632][18662] Saving new best policy, reward=8.421! |
|
[2025-08-14 03:29:37,636][18675] Updated weights for policy 0, policy_version 340 (0.0022) |
|
[2025-08-14 03:29:39,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1400832. Throughput: 0: 954.3. Samples: 347812. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:29:39,626][16404] Avg episode reward: [(0, '8.912')] |
|
[2025-08-14 03:29:39,629][18662] Saving new best policy, reward=8.912! |
|
[2025-08-14 03:29:44,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3823.4, 300 sec: 3804.4). Total num frames: 1421312. Throughput: 0: 961.6. Samples: 354526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:29:44,626][16404] Avg episode reward: [(0, '9.247')] |
|
[2025-08-14 03:29:44,634][18662] Saving new best policy, reward=9.247! |
|
[2025-08-14 03:29:48,696][18675] Updated weights for policy 0, policy_version 350 (0.0019) |
|
[2025-08-14 03:29:49,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 1437696. Throughput: 0: 968.8. Samples: 359036. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:29:49,628][16404] Avg episode reward: [(0, '9.346')] |
|
[2025-08-14 03:29:49,630][18662] Saving new best policy, reward=9.346! |
|
[2025-08-14 03:29:54,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.5). Total num frames: 1458176. Throughput: 0: 973.5. Samples: 362402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:29:54,628][16404] Avg episode reward: [(0, '10.034')] |
|
[2025-08-14 03:29:54,634][18662] Saving new best policy, reward=10.034! |
|
[2025-08-14 03:29:58,041][18675] Updated weights for policy 0, policy_version 360 (0.0023) |
|
[2025-08-14 03:29:59,630][16404] Fps is (10 sec: 4093.5, 60 sec: 3890.8, 300 sec: 3832.1). Total num frames: 1478656. Throughput: 0: 972.0. Samples: 368988. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:29:59,631][16404] Avg episode reward: [(0, '10.253')] |
|
[2025-08-14 03:29:59,633][18662] Saving new best policy, reward=10.253! |
|
[2025-08-14 03:30:04,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 1490944. Throughput: 0: 963.9. Samples: 373294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:30:04,625][16404] Avg episode reward: [(0, '10.068')] |
|
[2025-08-14 03:30:09,515][18675] Updated weights for policy 0, policy_version 370 (0.0029) |
|
[2025-08-14 03:30:09,624][16404] Fps is (10 sec: 3688.6, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1515520. Throughput: 0: 961.9. Samples: 376546. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:30:09,626][16404] Avg episode reward: [(0, '9.322')] |
|
[2025-08-14 03:30:14,631][16404] Fps is (10 sec: 4093.1, 60 sec: 3822.5, 300 sec: 3832.1). Total num frames: 1531904. Throughput: 0: 957.8. Samples: 382996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:30:14,637][16404] Avg episode reward: [(0, '10.086')] |
|
[2025-08-14 03:30:19,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1548288. Throughput: 0: 953.7. Samples: 387378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:30:19,629][16404] Avg episode reward: [(0, '10.381')] |
|
[2025-08-14 03:30:19,633][18662] Saving new best policy, reward=10.381! |
|
[2025-08-14 03:30:21,000][18675] Updated weights for policy 0, policy_version 380 (0.0023) |
|
[2025-08-14 03:30:24,624][16404] Fps is (10 sec: 3689.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1568768. Throughput: 0: 951.1. Samples: 390612. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:30:24,629][16404] Avg episode reward: [(0, '10.994')] |
|
[2025-08-14 03:30:24,639][18662] Saving new best policy, reward=10.994! |
|
[2025-08-14 03:30:29,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 1589248. Throughput: 0: 942.3. Samples: 396930. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:30:29,627][16404] Avg episode reward: [(0, '12.240')] |
|
[2025-08-14 03:30:29,629][18662] Saving new best policy, reward=12.240! |
|
[2025-08-14 03:30:32,274][18675] Updated weights for policy 0, policy_version 390 (0.0023) |
|
[2025-08-14 03:30:34,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1601536. Throughput: 0: 935.8. Samples: 401148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:30:34,629][16404] Avg episode reward: [(0, '12.592')] |
|
[2025-08-14 03:30:34,716][18662] Saving new best policy, reward=12.592! |
|
[2025-08-14 03:30:39,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 1626112. Throughput: 0: 930.6. Samples: 404278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:30:39,629][16404] Avg episode reward: [(0, '14.022')] |
|
[2025-08-14 03:30:39,634][18662] Saving new best policy, reward=14.022! |
|
[2025-08-14 03:30:42,586][18675] Updated weights for policy 0, policy_version 400 (0.0016) |
|
[2025-08-14 03:30:44,629][16404] Fps is (10 sec: 4093.9, 60 sec: 3686.1, 300 sec: 3846.0). Total num frames: 1642496. Throughput: 0: 924.4. Samples: 410586. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:30:44,633][16404] Avg episode reward: [(0, '14.199')] |
|
[2025-08-14 03:30:44,650][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000401_1642496.pth... |
|
[2025-08-14 03:30:44,788][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000401_1642496.pth |
|
[2025-08-14 03:30:44,808][18662] Saving new best policy, reward=14.199! |
|
[2025-08-14 03:30:49,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 1658880. Throughput: 0: 919.6. Samples: 414674. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:30:49,629][16404] Avg episode reward: [(0, '14.158')] |
|
[2025-08-14 03:30:54,418][18675] Updated weights for policy 0, policy_version 410 (0.0030) |
|
[2025-08-14 03:30:54,624][16404] Fps is (10 sec: 3688.3, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 1679360. Throughput: 0: 917.6. Samples: 417840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:30:54,629][16404] Avg episode reward: [(0, '13.893')] |
|
[2025-08-14 03:30:59,628][16404] Fps is (10 sec: 3684.9, 60 sec: 3618.3, 300 sec: 3832.1). Total num frames: 1695744. Throughput: 0: 910.3. Samples: 423958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:30:59,634][16404] Avg episode reward: [(0, '13.696')] |
|
[2025-08-14 03:31:04,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 1712128. Throughput: 0: 905.9. Samples: 428142. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:31:04,628][16404] Avg episode reward: [(0, '14.040')] |
|
[2025-08-14 03:31:06,396][18675] Updated weights for policy 0, policy_version 420 (0.0028) |
|
[2025-08-14 03:31:09,627][16404] Fps is (10 sec: 3686.8, 60 sec: 3618.0, 300 sec: 3804.4). Total num frames: 1732608. Throughput: 0: 903.2. Samples: 431258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:31:09,632][16404] Avg episode reward: [(0, '13.896')] |
|
[2025-08-14 03:31:14,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3686.8, 300 sec: 3832.2). Total num frames: 1753088. Throughput: 0: 906.0. Samples: 437700. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:31:14,627][16404] Avg episode reward: [(0, '14.401')] |
|
[2025-08-14 03:31:14,637][18662] Saving new best policy, reward=14.401! |
|
[2025-08-14 03:31:17,305][18675] Updated weights for policy 0, policy_version 430 (0.0017) |
|
[2025-08-14 03:31:19,624][16404] Fps is (10 sec: 3277.7, 60 sec: 3618.1, 300 sec: 3790.5). Total num frames: 1765376. Throughput: 0: 904.2. Samples: 441838. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:31:19,629][16404] Avg episode reward: [(0, '14.474')] |
|
[2025-08-14 03:31:19,634][18662] Saving new best policy, reward=14.474! |
|
[2025-08-14 03:31:24,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3790.5). Total num frames: 1785856. Throughput: 0: 902.7. Samples: 444900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:31:24,626][16404] Avg episode reward: [(0, '13.528')] |
|
[2025-08-14 03:31:28,010][18675] Updated weights for policy 0, policy_version 440 (0.0028) |
|
[2025-08-14 03:31:29,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3804.4). Total num frames: 1806336. Throughput: 0: 900.5. Samples: 451102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:31:29,626][16404] Avg episode reward: [(0, '14.160')] |
|
[2025-08-14 03:31:34,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3776.7). Total num frames: 1818624. Throughput: 0: 899.0. Samples: 455130. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:31:34,629][16404] Avg episode reward: [(0, '14.656')] |
|
[2025-08-14 03:31:34,643][18662] Saving new best policy, reward=14.656! |
|
[2025-08-14 03:31:39,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3762.8). Total num frames: 1839104. Throughput: 0: 892.4. Samples: 457998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:31:39,629][16404] Avg episode reward: [(0, '14.995')] |
|
[2025-08-14 03:31:39,634][18662] Saving new best policy, reward=14.995! |
|
[2025-08-14 03:31:40,260][18675] Updated weights for policy 0, policy_version 450 (0.0027) |
|
[2025-08-14 03:31:44,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3618.4, 300 sec: 3790.5). Total num frames: 1859584. Throughput: 0: 890.8. Samples: 464040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:31:44,631][16404] Avg episode reward: [(0, '15.920')] |
|
[2025-08-14 03:31:44,641][18662] Saving new best policy, reward=15.920! |
|
[2025-08-14 03:31:49,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3762.8). Total num frames: 1871872. Throughput: 0: 891.3. Samples: 468250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:31:49,625][16404] Avg episode reward: [(0, '17.166')] |
|
[2025-08-14 03:31:49,628][18662] Saving new best policy, reward=17.166! |
|
[2025-08-14 03:31:52,513][18675] Updated weights for policy 0, policy_version 460 (0.0013) |
|
[2025-08-14 03:31:54,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3762.8). Total num frames: 1892352. Throughput: 0: 885.2. Samples: 471088. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-08-14 03:31:54,625][16404] Avg episode reward: [(0, '17.051')] |
|
[2025-08-14 03:31:59,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3618.4, 300 sec: 3776.7). Total num frames: 1912832. Throughput: 0: 876.8. Samples: 477156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:31:59,628][16404] Avg episode reward: [(0, '16.637')] |
|
[2025-08-14 03:32:04,624][16404] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3721.1). Total num frames: 1921024. Throughput: 0: 880.5. Samples: 481460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:32:04,626][16404] Avg episode reward: [(0, '16.127')] |
|
[2025-08-14 03:32:04,784][18675] Updated weights for policy 0, policy_version 470 (0.0024) |
|
[2025-08-14 03:32:09,624][16404] Fps is (10 sec: 2867.2, 60 sec: 3481.8, 300 sec: 3721.1). Total num frames: 1941504. Throughput: 0: 871.5. Samples: 484118. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:32:09,633][16404] Avg episode reward: [(0, '14.798')] |
|
[2025-08-14 03:32:14,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3735.0). Total num frames: 1961984. Throughput: 0: 871.5. Samples: 490320. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-08-14 03:32:14,629][16404] Avg episode reward: [(0, '14.981')] |
|
[2025-08-14 03:32:14,693][18675] Updated weights for policy 0, policy_version 480 (0.0023) |
|
[2025-08-14 03:32:19,627][16404] Fps is (10 sec: 3685.3, 60 sec: 3549.7, 300 sec: 3721.1). Total num frames: 1978368. Throughput: 0: 883.6. Samples: 494896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:32:19,629][16404] Avg episode reward: [(0, '14.663')] |
|
[2025-08-14 03:32:24,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3707.2). Total num frames: 1994752. Throughput: 0: 879.6. Samples: 497578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:32:24,629][16404] Avg episode reward: [(0, '14.468')] |
|
[2025-08-14 03:32:26,941][18675] Updated weights for policy 0, policy_version 490 (0.0037) |
|
[2025-08-14 03:32:29,624][16404] Fps is (10 sec: 3687.5, 60 sec: 3481.6, 300 sec: 3721.1). Total num frames: 2015232. Throughput: 0: 877.5. Samples: 503526. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:32:29,629][16404] Avg episode reward: [(0, '14.725')] |
|
[2025-08-14 03:32:34,630][16404] Fps is (10 sec: 3684.2, 60 sec: 3549.5, 300 sec: 3707.2). Total num frames: 2031616. Throughput: 0: 883.1. Samples: 507996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:32:34,633][16404] Avg episode reward: [(0, '15.220')] |
|
[2025-08-14 03:32:38,823][18675] Updated weights for policy 0, policy_version 500 (0.0019) |
|
[2025-08-14 03:32:39,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3693.4). Total num frames: 2048000. Throughput: 0: 877.7. Samples: 510584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:32:39,625][16404] Avg episode reward: [(0, '15.317')] |
|
[2025-08-14 03:32:44,624][16404] Fps is (10 sec: 3688.6, 60 sec: 3481.6, 300 sec: 3707.2). Total num frames: 2068480. Throughput: 0: 881.0. Samples: 516802. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:32:44,625][16404] Avg episode reward: [(0, '16.601')] |
|
[2025-08-14 03:32:44,663][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000506_2072576.pth... |
|
[2025-08-14 03:32:44,805][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000506_2072576.pth |
|
[2025-08-14 03:32:49,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3693.3). Total num frames: 2084864. Throughput: 0: 888.5. Samples: 521444. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:32:49,625][16404] Avg episode reward: [(0, '16.600')] |
|
[2025-08-14 03:32:50,748][18675] Updated weights for policy 0, policy_version 510 (0.0033) |
|
[2025-08-14 03:32:54,627][16404] Fps is (10 sec: 3685.1, 60 sec: 3549.7, 300 sec: 3693.3). Total num frames: 2105344. Throughput: 0: 888.0. Samples: 524080. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:32:54,629][16404] Avg episode reward: [(0, '17.259')] |
|
[2025-08-14 03:32:54,642][18662] Saving new best policy, reward=17.259! |
|
[2025-08-14 03:32:59,626][16404] Fps is (10 sec: 4095.2, 60 sec: 3549.7, 300 sec: 3707.2). Total num frames: 2125824. Throughput: 0: 898.9. Samples: 530774. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:32:59,627][16404] Avg episode reward: [(0, '16.970')] |
|
[2025-08-14 03:33:00,081][18675] Updated weights for policy 0, policy_version 520 (0.0026) |
|
[2025-08-14 03:33:04,624][16404] Fps is (10 sec: 3687.7, 60 sec: 3686.4, 300 sec: 3693.4). Total num frames: 2142208. Throughput: 0: 913.0. Samples: 535978. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:33:04,628][16404] Avg episode reward: [(0, '17.251')] |
|
[2025-08-14 03:33:09,624][16404] Fps is (10 sec: 3687.1, 60 sec: 3686.4, 300 sec: 3693.4). Total num frames: 2162688. Throughput: 0: 912.1. Samples: 538622. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:33:09,625][16404] Avg episode reward: [(0, '17.216')] |
|
[2025-08-14 03:33:11,187][18675] Updated weights for policy 0, policy_version 530 (0.0042) |
|
[2025-08-14 03:33:14,624][16404] Fps is (10 sec: 4095.9, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2183168. Throughput: 0: 930.0. Samples: 545376. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:33:14,626][16404] Avg episode reward: [(0, '18.002')] |
|
[2025-08-14 03:33:14,631][18662] Saving new best policy, reward=18.002! |
|
[2025-08-14 03:33:19,624][16404] Fps is (10 sec: 3686.3, 60 sec: 3686.6, 300 sec: 3693.3). Total num frames: 2199552. Throughput: 0: 943.6. Samples: 550454. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:33:19,626][16404] Avg episode reward: [(0, '19.825')] |
|
[2025-08-14 03:33:19,630][18662] Saving new best policy, reward=19.825! |
|
[2025-08-14 03:33:22,456][18675] Updated weights for policy 0, policy_version 540 (0.0017) |
|
[2025-08-14 03:33:24,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3754.6, 300 sec: 3693.3). Total num frames: 2220032. Throughput: 0: 942.6. Samples: 553000. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:33:24,626][16404] Avg episode reward: [(0, '19.464')] |
|
[2025-08-14 03:33:29,624][16404] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 2240512. Throughput: 0: 950.1. Samples: 559558. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:33:29,625][16404] Avg episode reward: [(0, '19.321')] |
|
[2025-08-14 03:33:32,584][18675] Updated weights for policy 0, policy_version 550 (0.0034) |
|
[2025-08-14 03:33:34,624][16404] Fps is (10 sec: 3686.5, 60 sec: 3755.0, 300 sec: 3679.5). Total num frames: 2256896. Throughput: 0: 960.7. Samples: 564674. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:33:34,631][16404] Avg episode reward: [(0, '18.597')] |
|
[2025-08-14 03:33:39,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3679.5). Total num frames: 2277376. Throughput: 0: 960.7. Samples: 567308. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:33:39,625][16404] Avg episode reward: [(0, '18.214')] |
|
[2025-08-14 03:33:42,844][18675] Updated weights for policy 0, policy_version 560 (0.0027) |
|
[2025-08-14 03:33:44,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3721.1). Total num frames: 2301952. Throughput: 0: 963.2. Samples: 574114. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:33:44,628][16404] Avg episode reward: [(0, '17.881')] |
|
[2025-08-14 03:33:49,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3679.5). Total num frames: 2314240. Throughput: 0: 969.2. Samples: 579592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:33:49,627][16404] Avg episode reward: [(0, '17.766')] |
|
[2025-08-14 03:33:53,692][18675] Updated weights for policy 0, policy_version 570 (0.0021) |
|
[2025-08-14 03:33:54,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3891.4, 300 sec: 3707.2). Total num frames: 2338816. Throughput: 0: 970.7. Samples: 582302. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:33:54,626][16404] Avg episode reward: [(0, '18.240')] |
|
[2025-08-14 03:33:59,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3891.3, 300 sec: 3721.1). Total num frames: 2359296. Throughput: 0: 973.3. Samples: 589172. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:33:59,629][16404] Avg episode reward: [(0, '18.537')] |
|
[2025-08-14 03:34:03,869][18675] Updated weights for policy 0, policy_version 580 (0.0021) |
|
[2025-08-14 03:34:04,624][16404] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 2375680. Throughput: 0: 977.1. Samples: 594422. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-14 03:34:04,625][16404] Avg episode reward: [(0, '17.932')] |
|
[2025-08-14 03:34:09,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 2396160. Throughput: 0: 981.2. Samples: 597154. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:34:09,625][16404] Avg episode reward: [(0, '19.273')] |
|
[2025-08-14 03:34:13,626][18675] Updated weights for policy 0, policy_version 590 (0.0017) |
|
[2025-08-14 03:34:14,624][16404] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3735.0). Total num frames: 2420736. Throughput: 0: 985.9. Samples: 603922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:34:14,625][16404] Avg episode reward: [(0, '18.542')] |
|
[2025-08-14 03:34:19,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 2433024. Throughput: 0: 990.7. Samples: 609254. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:34:19,628][16404] Avg episode reward: [(0, '17.579')] |
|
[2025-08-14 03:34:24,470][18675] Updated weights for policy 0, policy_version 600 (0.0034) |
|
[2025-08-14 03:34:24,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3721.1). Total num frames: 2457600. Throughput: 0: 993.9. Samples: 612032. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:34:24,631][16404] Avg episode reward: [(0, '17.762')] |
|
[2025-08-14 03:34:29,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3735.0). Total num frames: 2478080. Throughput: 0: 996.1. Samples: 618938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:34:29,625][16404] Avg episode reward: [(0, '18.047')] |
|
[2025-08-14 03:34:34,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3707.2). Total num frames: 2494464. Throughput: 0: 988.1. Samples: 624058. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:34:34,627][16404] Avg episode reward: [(0, '17.882')] |
|
[2025-08-14 03:34:35,381][18675] Updated weights for policy 0, policy_version 610 (0.0017) |
|
[2025-08-14 03:34:39,624][16404] Fps is (10 sec: 3686.3, 60 sec: 3959.4, 300 sec: 3707.2). Total num frames: 2514944. Throughput: 0: 992.0. Samples: 626942. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:34:39,628][16404] Avg episode reward: [(0, '19.788')] |
|
[2025-08-14 03:34:44,474][18675] Updated weights for policy 0, policy_version 620 (0.0013) |
|
[2025-08-14 03:34:44,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3735.0). Total num frames: 2539520. Throughput: 0: 992.6. Samples: 633840. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:34:44,625][16404] Avg episode reward: [(0, '21.021')] |
|
[2025-08-14 03:34:44,634][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000620_2539520.pth... |
|
[2025-08-14 03:34:44,752][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000620_2539520.pth |
|
[2025-08-14 03:34:44,770][18662] Saving new best policy, reward=21.021! |
|
[2025-08-14 03:34:49,624][16404] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3707.2). Total num frames: 2551808. Throughput: 0: 987.2. Samples: 638844. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:34:49,628][16404] Avg episode reward: [(0, '20.492')] |
|
[2025-08-14 03:34:54,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3721.2). Total num frames: 2576384. Throughput: 0: 992.4. Samples: 641814. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:34:54,625][16404] Avg episode reward: [(0, '20.801')] |
|
[2025-08-14 03:34:55,281][18675] Updated weights for policy 0, policy_version 630 (0.0028) |
|
[2025-08-14 03:34:59,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3748.9). Total num frames: 2596864. Throughput: 0: 993.7. Samples: 648638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:34:59,625][16404] Avg episode reward: [(0, '21.153')] |
|
[2025-08-14 03:34:59,630][18662] Saving new best policy, reward=21.153! |
|
[2025-08-14 03:35:04,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3721.1). Total num frames: 2613248. Throughput: 0: 982.7. Samples: 653474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:35:04,625][16404] Avg episode reward: [(0, '19.405')] |
|
[2025-08-14 03:35:06,463][18675] Updated weights for policy 0, policy_version 640 (0.0028) |
|
[2025-08-14 03:35:09,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3735.1). Total num frames: 2633728. Throughput: 0: 987.1. Samples: 656452. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:35:09,629][16404] Avg episode reward: [(0, '19.155')] |
|
[2025-08-14 03:35:14,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3762.8). Total num frames: 2658304. Throughput: 0: 987.0. Samples: 663352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:35:14,629][16404] Avg episode reward: [(0, '19.149')] |
|
[2025-08-14 03:35:15,565][18675] Updated weights for policy 0, policy_version 650 (0.0022) |
|
[2025-08-14 03:35:19,630][16404] Fps is (10 sec: 3684.1, 60 sec: 3959.1, 300 sec: 3734.9). Total num frames: 2670592. Throughput: 0: 985.6. Samples: 668414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:35:19,631][16404] Avg episode reward: [(0, '18.843')] |
|
[2025-08-14 03:35:24,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3748.9). Total num frames: 2695168. Throughput: 0: 990.1. Samples: 671494. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:35:24,625][16404] Avg episode reward: [(0, '18.698')] |
|
[2025-08-14 03:35:26,321][18675] Updated weights for policy 0, policy_version 660 (0.0014) |
|
[2025-08-14 03:35:29,624][16404] Fps is (10 sec: 4508.3, 60 sec: 3959.5, 300 sec: 3776.6). Total num frames: 2715648. Throughput: 0: 987.9. Samples: 678296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:35:29,631][16404] Avg episode reward: [(0, '18.364')] |
|
[2025-08-14 03:35:34,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3748.9). Total num frames: 2732032. Throughput: 0: 984.4. Samples: 683142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:35:34,627][16404] Avg episode reward: [(0, '17.625')] |
|
[2025-08-14 03:35:37,282][18675] Updated weights for policy 0, policy_version 670 (0.0014) |
|
[2025-08-14 03:35:39,624][16404] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3762.8). Total num frames: 2752512. Throughput: 0: 987.4. Samples: 686246. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:35:39,630][16404] Avg episode reward: [(0, '16.335')] |
|
[2025-08-14 03:35:44,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3790.5). Total num frames: 2777088. Throughput: 0: 988.9. Samples: 693138. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:35:44,625][16404] Avg episode reward: [(0, '16.333')] |
|
[2025-08-14 03:35:47,085][18675] Updated weights for policy 0, policy_version 680 (0.0015) |
|
[2025-08-14 03:35:49,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3762.8). Total num frames: 2789376. Throughput: 0: 993.2. Samples: 698168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:35:49,625][16404] Avg episode reward: [(0, '16.619')] |
|
[2025-08-14 03:35:54,626][16404] Fps is (10 sec: 3685.6, 60 sec: 3959.3, 300 sec: 3790.6). Total num frames: 2813952. Throughput: 0: 995.7. Samples: 701260. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:35:54,630][16404] Avg episode reward: [(0, '18.566')] |
|
[2025-08-14 03:35:56,841][18675] Updated weights for policy 0, policy_version 690 (0.0025) |
|
[2025-08-14 03:35:59,625][16404] Fps is (10 sec: 4914.7, 60 sec: 4027.7, 300 sec: 3818.3). Total num frames: 2838528. Throughput: 0: 997.0. Samples: 708218. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:35:59,630][16404] Avg episode reward: [(0, '19.381')] |
|
[2025-08-14 03:36:04,624][16404] Fps is (10 sec: 3687.0, 60 sec: 3959.4, 300 sec: 3790.6). Total num frames: 2850816. Throughput: 0: 994.3. Samples: 713150. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:36:04,634][16404] Avg episode reward: [(0, '20.342')] |
|
[2025-08-14 03:36:07,771][18675] Updated weights for policy 0, policy_version 700 (0.0018) |
|
[2025-08-14 03:36:09,624][16404] Fps is (10 sec: 3686.8, 60 sec: 4027.7, 300 sec: 3804.4). Total num frames: 2875392. Throughput: 0: 996.2. Samples: 716324. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:36:09,630][16404] Avg episode reward: [(0, '20.428')] |
|
[2025-08-14 03:36:14,624][16404] Fps is (10 sec: 4505.8, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2895872. Throughput: 0: 999.2. Samples: 723262. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:36:14,625][16404] Avg episode reward: [(0, '18.586')] |
|
[2025-08-14 03:36:17,934][18675] Updated weights for policy 0, policy_version 710 (0.0028) |
|
[2025-08-14 03:36:19,624][16404] Fps is (10 sec: 3686.3, 60 sec: 4028.1, 300 sec: 3818.3). Total num frames: 2912256. Throughput: 0: 1002.4. Samples: 728252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:36:19,630][16404] Avg episode reward: [(0, '18.842')] |
|
[2025-08-14 03:36:24,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 2932736. Throughput: 0: 1004.1. Samples: 731432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:36:24,630][16404] Avg episode reward: [(0, '18.833')] |
|
[2025-08-14 03:36:27,425][18675] Updated weights for policy 0, policy_version 720 (0.0024) |
|
[2025-08-14 03:36:29,627][16404] Fps is (10 sec: 4504.3, 60 sec: 4027.5, 300 sec: 3859.9). Total num frames: 2957312. Throughput: 0: 1004.2. Samples: 738328. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:36:29,628][16404] Avg episode reward: [(0, '19.679')] |
|
[2025-08-14 03:36:34,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2969600. Throughput: 0: 1000.2. Samples: 743178. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:36:34,625][16404] Avg episode reward: [(0, '20.042')] |
|
[2025-08-14 03:36:38,409][18675] Updated weights for policy 0, policy_version 730 (0.0025) |
|
[2025-08-14 03:36:39,624][16404] Fps is (10 sec: 3687.5, 60 sec: 4027.7, 300 sec: 3846.1). Total num frames: 2994176. Throughput: 0: 1000.2. Samples: 746268. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:36:39,628][16404] Avg episode reward: [(0, '20.639')] |
|
[2025-08-14 03:36:44,628][16404] Fps is (10 sec: 4503.8, 60 sec: 3959.2, 300 sec: 3873.8). Total num frames: 3014656. Throughput: 0: 998.0. Samples: 753132. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:36:44,633][16404] Avg episode reward: [(0, '21.608')] |
|
[2025-08-14 03:36:44,649][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000736_3014656.pth... |
|
[2025-08-14 03:36:44,827][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000736_3014656.pth |
|
[2025-08-14 03:36:44,842][18662] Saving new best policy, reward=21.608! |
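|
The "Saving new best policy" lines follow a simple improvement rule: persist a separate best-policy file whenever the average episode reward exceeds the best seen so far. A hedged sketch (maybe_save_best and save_fn are illustrative names, not sample-factory's API):

best_reward = float("-inf")

def maybe_save_best(avg_episode_reward, save_fn):
    global best_reward
    if avg_episode_reward > best_reward:
        best_reward = avg_episode_reward
        save_fn()  # write the best-policy checkpoint
        print(f"Saving new best policy, reward={avg_episode_reward:.3f}!")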
|
[2025-08-14 03:36:49,410][18675] Updated weights for policy 0, policy_version 740 (0.0019) |
|
[2025-08-14 03:36:49,624][16404] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3860.0). Total num frames: 3031040. Throughput: 0: 996.3. Samples: 757984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:36:49,628][16404] Avg episode reward: [(0, '21.758')] |
|
[2025-08-14 03:36:49,633][18662] Saving new best policy, reward=21.758! |
|
[2025-08-14 03:36:54,624][16404] Fps is (10 sec: 3687.9, 60 sec: 3959.6, 300 sec: 3860.0). Total num frames: 3051520. Throughput: 0: 995.4. Samples: 761116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:36:54,625][16404] Avg episode reward: [(0, '21.043')] |
|
[2025-08-14 03:36:58,384][18675] Updated weights for policy 0, policy_version 750 (0.0015) |
|
[2025-08-14 03:36:59,631][16404] Fps is (10 sec: 4502.4, 60 sec: 3959.1, 300 sec: 3915.4). Total num frames: 3076096. Throughput: 0: 993.0. Samples: 767952. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:36:59,632][16404] Avg episode reward: [(0, '20.524')] |
|
[2025-08-14 03:37:04,625][16404] Fps is (10 sec: 3685.9, 60 sec: 3959.4, 300 sec: 3887.7). Total num frames: 3088384. Throughput: 0: 987.4. Samples: 772686. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:37:04,627][16404] Avg episode reward: [(0, '20.625')] |
|
[2025-08-14 03:37:09,506][18675] Updated weights for policy 0, policy_version 760 (0.0017) |
|
[2025-08-14 03:37:09,624][16404] Fps is (10 sec: 3688.9, 60 sec: 3959.4, 300 sec: 3901.6). Total num frames: 3112960. Throughput: 0: 986.2. Samples: 775810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:37:09,629][16404] Avg episode reward: [(0, '19.678')] |
|
[2025-08-14 03:37:14,624][16404] Fps is (10 sec: 4506.2, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3133440. Throughput: 0: 985.2. Samples: 782658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:37:14,630][16404] Avg episode reward: [(0, '19.810')] |
|
[2025-08-14 03:37:19,627][16404] Fps is (10 sec: 3275.8, 60 sec: 3891.0, 300 sec: 3901.6). Total num frames: 3145728. Throughput: 0: 983.7. Samples: 787446. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:37:19,631][16404] Avg episode reward: [(0, '20.700')] |
|
[2025-08-14 03:37:20,785][18675] Updated weights for policy 0, policy_version 770 (0.0020) |
|
[2025-08-14 03:37:24,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3170304. Throughput: 0: 979.8. Samples: 790358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:37:24,625][16404] Avg episode reward: [(0, '20.745')] |
|
[2025-08-14 03:37:29,628][16404] Fps is (10 sec: 4505.0, 60 sec: 3891.1, 300 sec: 3929.4). Total num frames: 3190784. Throughput: 0: 971.9. Samples: 796870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:37:29,630][16404] Avg episode reward: [(0, '20.719')] |
|
[2025-08-14 03:37:30,563][18675] Updated weights for policy 0, policy_version 780 (0.0017) |
|
[2025-08-14 03:37:34,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 3203072. Throughput: 0: 965.3. Samples: 801422. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:37:34,626][16404] Avg episode reward: [(0, '21.147')] |
|
[2025-08-14 03:37:39,624][16404] Fps is (10 sec: 3278.3, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 3223552. Throughput: 0: 958.8. Samples: 804260. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:37:39,625][16404] Avg episode reward: [(0, '21.871')] |
|
[2025-08-14 03:37:39,627][18662] Saving new best policy, reward=21.871! |
|
[2025-08-14 03:37:41,661][18675] Updated weights for policy 0, policy_version 790 (0.0020) |
|
[2025-08-14 03:37:44,624][16404] Fps is (10 sec: 4505.6, 60 sec: 3891.5, 300 sec: 3943.3). Total num frames: 3248128. Throughput: 0: 954.1. Samples: 810882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:37:44,630][16404] Avg episode reward: [(0, '22.179')] |
|
[2025-08-14 03:37:44,649][18662] Saving new best policy, reward=22.179! |
|
[2025-08-14 03:37:49,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 3260416. Throughput: 0: 953.4. Samples: 815590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:37:49,625][16404] Avg episode reward: [(0, '22.308')] |
|
[2025-08-14 03:37:49,627][18662] Saving new best policy, reward=22.308! |
|
[2025-08-14 03:37:53,269][18675] Updated weights for policy 0, policy_version 800 (0.0027) |
|
[2025-08-14 03:37:54,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 3280896. Throughput: 0: 943.8. Samples: 818282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-14 03:37:54,627][16404] Avg episode reward: [(0, '23.119')] |
|
[2025-08-14 03:37:54,634][18662] Saving new best policy, reward=23.119! |
|
[2025-08-14 03:37:59,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3755.1, 300 sec: 3929.4). Total num frames: 3301376. Throughput: 0: 933.1. Samples: 824646. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:37:59,627][16404] Avg episode reward: [(0, '23.141')] |
|
[2025-08-14 03:37:59,631][18662] Saving new best policy, reward=23.141! |
|
[2025-08-14 03:38:04,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3901.6). Total num frames: 3313664. Throughput: 0: 930.2. Samples: 829302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:38:04,625][16404] Avg episode reward: [(0, '22.406')] |
|
[2025-08-14 03:38:05,054][18675] Updated weights for policy 0, policy_version 810 (0.0021) |
|
[2025-08-14 03:38:09,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3915.5). Total num frames: 3338240. Throughput: 0: 924.4. Samples: 831958. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:38:09,625][16404] Avg episode reward: [(0, '22.015')] |
|
[2025-08-14 03:38:14,112][18675] Updated weights for policy 0, policy_version 820 (0.0021) |
|
[2025-08-14 03:38:14,624][16404] Fps is (10 sec: 4505.5, 60 sec: 3754.7, 300 sec: 3929.4). Total num frames: 3358720. Throughput: 0: 932.0. Samples: 838806. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:38:14,625][16404] Avg episode reward: [(0, '22.542')] |
|
[2025-08-14 03:38:19,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3915.5). Total num frames: 3375104. Throughput: 0: 944.9. Samples: 843944. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-14 03:38:19,627][16404] Avg episode reward: [(0, '21.158')] |
|
[2025-08-14 03:38:24,624][16404] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3901.6). Total num frames: 3391488. Throughput: 0: 936.3. Samples: 846392. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:38:24,626][16404] Avg episode reward: [(0, '20.648')] |
|
[2025-08-14 03:38:25,797][18675] Updated weights for policy 0, policy_version 830 (0.0022) |
|
[2025-08-14 03:38:29,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3686.7, 300 sec: 3915.5). Total num frames: 3411968. Throughput: 0: 929.5. Samples: 852708. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:38:29,627][16404] Avg episode reward: [(0, '21.589')] |
|
[2025-08-14 03:38:34,624][16404] Fps is (10 sec: 3686.2, 60 sec: 3754.6, 300 sec: 3901.6). Total num frames: 3428352. Throughput: 0: 933.9. Samples: 857614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:38:34,628][16404] Avg episode reward: [(0, '22.507')] |
|
[2025-08-14 03:38:37,710][18675] Updated weights for policy 0, policy_version 840 (0.0021) |
|
[2025-08-14 03:38:39,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3887.7). Total num frames: 3448832. Throughput: 0: 925.3. Samples: 859920. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:38:39,625][16404] Avg episode reward: [(0, '20.955')] |
|
[2025-08-14 03:38:44,624][16404] Fps is (10 sec: 4096.2, 60 sec: 3686.4, 300 sec: 3915.5). Total num frames: 3469312. Throughput: 0: 921.2. Samples: 866100. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:38:44,629][16404] Avg episode reward: [(0, '20.600')] |
|
[2025-08-14 03:38:44,641][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000847_3469312.pth... |
|
[2025-08-14 03:38:44,763][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000847_3469312.pth |
|
[2025-08-14 03:38:48,661][18675] Updated weights for policy 0, policy_version 850 (0.0019) |
|
[2025-08-14 03:38:49,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3873.8). Total num frames: 3481600. Throughput: 0: 926.0. Samples: 870970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:38:49,627][16404] Avg episode reward: [(0, '20.995')] |
|
[2025-08-14 03:38:54,624][16404] Fps is (10 sec: 2867.1, 60 sec: 3618.1, 300 sec: 3860.0). Total num frames: 3497984. Throughput: 0: 913.7. Samples: 873074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:38:54,626][16404] Avg episode reward: [(0, '22.473')] |
|
[2025-08-14 03:38:59,624][16404] Fps is (10 sec: 3686.2, 60 sec: 3618.1, 300 sec: 3873.8). Total num frames: 3518464. Throughput: 0: 900.0. Samples: 879306. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:38:59,630][16404] Avg episode reward: [(0, '22.327')] |
|
[2025-08-14 03:38:59,797][18675] Updated weights for policy 0, policy_version 860 (0.0020) |
|
[2025-08-14 03:39:04,627][16404] Fps is (10 sec: 3685.3, 60 sec: 3686.2, 300 sec: 3859.9). Total num frames: 3534848. Throughput: 0: 900.7. Samples: 884478. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:39:04,629][16404] Avg episode reward: [(0, '21.875')] |
|
[2025-08-14 03:39:09,624][16404] Fps is (10 sec: 3277.0, 60 sec: 3549.9, 300 sec: 3832.2). Total num frames: 3551232. Throughput: 0: 888.6. Samples: 886378. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:39:09,630][16404] Avg episode reward: [(0, '21.924')] |
|
[2025-08-14 03:39:11,805][18675] Updated weights for policy 0, policy_version 870 (0.0020) |
|
[2025-08-14 03:39:14,624][16404] Fps is (10 sec: 3687.6, 60 sec: 3549.9, 300 sec: 3860.0). Total num frames: 3571712. Throughput: 0: 885.8. Samples: 892570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:39:14,627][16404] Avg episode reward: [(0, '23.211')] |
|
[2025-08-14 03:39:14,639][18662] Saving new best policy, reward=23.211! |
|
[2025-08-14 03:39:19,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3832.2). Total num frames: 3588096. Throughput: 0: 898.4. Samples: 898040. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:39:19,628][16404] Avg episode reward: [(0, '23.576')] |
|
[2025-08-14 03:39:19,635][18662] Saving new best policy, reward=23.576! |
|
[2025-08-14 03:39:23,759][18675] Updated weights for policy 0, policy_version 880 (0.0022) |
|
[2025-08-14 03:39:24,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3818.3). Total num frames: 3604480. Throughput: 0: 886.5. Samples: 899814. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:39:24,628][16404] Avg episode reward: [(0, '22.587')] |
|
[2025-08-14 03:39:29,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3832.2). Total num frames: 3624960. Throughput: 0: 884.5. Samples: 905902. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-08-14 03:39:29,629][16404] Avg episode reward: [(0, '23.012')] |
|
[2025-08-14 03:39:34,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3818.3). Total num frames: 3641344. Throughput: 0: 894.8. Samples: 911238. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-08-14 03:39:34,625][16404] Avg episode reward: [(0, '23.739')] |
|
[2025-08-14 03:39:34,639][18662] Saving new best policy, reward=23.739! |
|
[2025-08-14 03:39:35,056][18675] Updated weights for policy 0, policy_version 890 (0.0041) |
|
[2025-08-14 03:39:39,624][16404] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3790.5). Total num frames: 3657728. Throughput: 0: 886.5. Samples: 912968. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:39:39,631][16404] Avg episode reward: [(0, '25.204')] |
|
[2025-08-14 03:39:39,637][18662] Saving new best policy, reward=25.204! |
|
[2025-08-14 03:39:44,624][16404] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3818.3). Total num frames: 3678208. Throughput: 0: 880.8. Samples: 918942. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:39:44,625][16404] Avg episode reward: [(0, '23.530')] |
|
[2025-08-14 03:39:46,051][18675] Updated weights for policy 0, policy_version 900 (0.0026) |
|
[2025-08-14 03:39:49,625][16404] Fps is (10 sec: 3686.0, 60 sec: 3549.8, 300 sec: 3790.5). Total num frames: 3694592. Throughput: 0: 886.4. Samples: 924366. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:39:49,630][16404] Avg episode reward: [(0, '22.332')] |
|
[2025-08-14 03:39:54,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3776.7). Total num frames: 3710976. Throughput: 0: 885.2. Samples: 926210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-14 03:39:54,625][16404] Avg episode reward: [(0, '22.207')] |
|
[2025-08-14 03:39:58,299][18675] Updated weights for policy 0, policy_version 910 (0.0034) |
|
[2025-08-14 03:39:59,624][16404] Fps is (10 sec: 3686.9, 60 sec: 3549.9, 300 sec: 3790.5). Total num frames: 3731456. Throughput: 0: 876.2. Samples: 932000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:39:59,629][16404] Avg episode reward: [(0, '20.678')] |
|
[2025-08-14 03:40:04,630][16404] Fps is (10 sec: 3684.2, 60 sec: 3549.7, 300 sec: 3776.6). Total num frames: 3747840. Throughput: 0: 882.4. Samples: 937754. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:40:04,631][16404] Avg episode reward: [(0, '20.070')] |
|
[2025-08-14 03:40:09,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3748.9). Total num frames: 3764224. Throughput: 0: 883.8. Samples: 939584. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:40:09,625][16404] Avg episode reward: [(0, '19.720')] |
|
[2025-08-14 03:40:10,529][18675] Updated weights for policy 0, policy_version 920 (0.0018) |
|
[2025-08-14 03:40:14,624][16404] Fps is (10 sec: 3688.6, 60 sec: 3549.9, 300 sec: 3776.7). Total num frames: 3784704. Throughput: 0: 871.2. Samples: 945108. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:40:14,628][16404] Avg episode reward: [(0, '20.599')] |
|
[2025-08-14 03:40:19,626][16404] Fps is (10 sec: 3685.6, 60 sec: 3549.7, 300 sec: 3748.9). Total num frames: 3801088. Throughput: 0: 885.9. Samples: 951106. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:40:19,628][16404] Avg episode reward: [(0, '21.689')] |
|
[2025-08-14 03:40:21,926][18675] Updated weights for policy 0, policy_version 930 (0.0028) |
|
[2025-08-14 03:40:24,624][16404] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3721.1). Total num frames: 3813376. Throughput: 0: 888.3. Samples: 952942. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-14 03:40:24,625][16404] Avg episode reward: [(0, '22.107')] |
|
[2025-08-14 03:40:29,624][16404] Fps is (10 sec: 3277.5, 60 sec: 3481.6, 300 sec: 3735.0). Total num frames: 3833856. Throughput: 0: 872.3. Samples: 958194. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-14 03:40:29,625][16404] Avg episode reward: [(0, '22.604')] |
|
[2025-08-14 03:40:32,824][18675] Updated weights for policy 0, policy_version 940 (0.0021) |
|
[2025-08-14 03:40:34,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3735.0). Total num frames: 3854336. Throughput: 0: 885.7. Samples: 964222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:40:34,626][16404] Avg episode reward: [(0, '22.664')] |
|
[2025-08-14 03:40:39,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3693.3). Total num frames: 3866624. Throughput: 0: 885.2. Samples: 966044. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:40:39,625][16404] Avg episode reward: [(0, '22.806')] |
|
[2025-08-14 03:40:44,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3721.1). Total num frames: 3887104. Throughput: 0: 870.6. Samples: 971178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:40:44,626][16404] Avg episode reward: [(0, '24.012')] |
|
[2025-08-14 03:40:44,632][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000949_3887104.pth... |
|
[2025-08-14 03:40:44,760][18662] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000889_3641344.pth |
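|
The save/remove pair above is keep-latest checkpoint rotation: write the newest checkpoint, then delete the oldest ones beyond a retention count. A minimal sketch under those assumptions (the keep count and helper names are illustrative, not sample-factory's actual implementation):

import os
from glob import glob

def save_and_rotate(write_checkpoint, ckpt_dir, keep=2):
    write_checkpoint()  # writes e.g. checkpoint_000000949_3887104.pth
    ckpts = sorted(glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    for old in ckpts[:-keep]:  # zero-padded names sort oldest first
        os.remove(old)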
|
[2025-08-14 03:40:45,203][18675] Updated weights for policy 0, policy_version 950 (0.0021) |
|
[2025-08-14 03:40:49,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3550.0, 300 sec: 3707.3). Total num frames: 3907584. Throughput: 0: 875.3. Samples: 977136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:40:49,630][16404] Avg episode reward: [(0, '24.398')] |
|
[2025-08-14 03:40:54,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3665.6). Total num frames: 3919872. Throughput: 0: 879.6. Samples: 979164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:40:54,630][16404] Avg episode reward: [(0, '25.579')] |
|
[2025-08-14 03:40:54,642][18662] Saving new best policy, reward=25.579! |
|
[2025-08-14 03:40:57,542][18675] Updated weights for policy 0, policy_version 960 (0.0029) |
|
[2025-08-14 03:40:59,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3693.3). Total num frames: 3940352. Throughput: 0: 865.8. Samples: 984068. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:40:59,629][16404] Avg episode reward: [(0, '24.287')] |
|
[2025-08-14 03:41:04,624][16404] Fps is (10 sec: 4096.0, 60 sec: 3550.2, 300 sec: 3679.5). Total num frames: 3960832. Throughput: 0: 867.7. Samples: 990150. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:41:04,625][16404] Avg episode reward: [(0, '23.930')] |
|
[2025-08-14 03:41:09,145][18675] Updated weights for policy 0, policy_version 970 (0.0013) |
|
[2025-08-14 03:41:09,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3651.7). Total num frames: 3973120. Throughput: 0: 876.0. Samples: 992364. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-14 03:41:09,635][16404] Avg episode reward: [(0, '23.882')] |
|
[2025-08-14 03:41:14,624][16404] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3665.6). Total num frames: 3993600. Throughput: 0: 878.4. Samples: 997720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-14 03:41:14,630][16404] Avg episode reward: [(0, '23.051')] |
|
[2025-08-14 03:41:16,838][18662] Stopping Batcher_0... |
|
[2025-08-14 03:41:16,840][18662] Loop batcher_evt_loop terminating... |
|
[2025-08-14 03:41:16,840][16404] Component Batcher_0 stopped! |
|
[2025-08-14 03:41:16,847][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:41:16,900][18675] Weights refcount: 2 0 |
|
[2025-08-14 03:41:16,903][18675] Stopping InferenceWorker_p0-w0... |
|
[2025-08-14 03:41:16,903][18675] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-08-14 03:41:16,904][16404] Component InferenceWorker_p0-w0 stopped! |
|
[2025-08-14 03:41:17,014][18662] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:41:17,176][16404] Component LearnerWorker_p0 stopped! |
|
[2025-08-14 03:41:17,182][18662] Stopping LearnerWorker_p0... |
|
[2025-08-14 03:41:17,186][18662] Loop learner_proc0_evt_loop terminating... |
|
[2025-08-14 03:41:17,205][16404] Component RolloutWorker_w3 stopped! |
|
[2025-08-14 03:41:17,206][18681] Stopping RolloutWorker_w7... |
|
[2025-08-14 03:41:17,204][18679] Stopping RolloutWorker_w3... |
|
[2025-08-14 03:41:17,206][16404] Component RolloutWorker_w7 stopped! |
|
[2025-08-14 03:41:17,207][18679] Loop rollout_proc3_evt_loop terminating... |
|
[2025-08-14 03:41:17,207][18681] Loop rollout_proc7_evt_loop terminating... |
|
[2025-08-14 03:41:17,212][16404] Component RolloutWorker_w5 stopped! |
|
[2025-08-14 03:41:17,213][18680] Stopping RolloutWorker_w5... |
|
[2025-08-14 03:41:17,214][18680] Loop rollout_proc5_evt_loop terminating... |
|
[2025-08-14 03:41:17,217][16404] Component RolloutWorker_w1 stopped! |
|
[2025-08-14 03:41:17,218][18677] Stopping RolloutWorker_w1... |
|
[2025-08-14 03:41:17,219][18677] Loop rollout_proc1_evt_loop terminating... |
|
[2025-08-14 03:41:17,379][16404] Component RolloutWorker_w4 stopped! |
|
[2025-08-14 03:41:17,379][18682] Stopping RolloutWorker_w4... |
|
[2025-08-14 03:41:17,385][18682] Loop rollout_proc4_evt_loop terminating... |
|
[2025-08-14 03:41:17,392][16404] Component RolloutWorker_w2 stopped! |
|
[2025-08-14 03:41:17,393][18678] Stopping RolloutWorker_w2... |
|
[2025-08-14 03:41:17,401][18678] Loop rollout_proc2_evt_loop terminating... |
|
[2025-08-14 03:41:17,415][16404] Component RolloutWorker_w6 stopped! |
|
[2025-08-14 03:41:17,415][18683] Stopping RolloutWorker_w6... |
|
[2025-08-14 03:41:17,420][18683] Loop rollout_proc6_evt_loop terminating... |
|
[2025-08-14 03:41:17,481][16404] Component RolloutWorker_w0 stopped! |
|
[2025-08-14 03:41:17,486][16404] Waiting for process learner_proc0 to stop... |
|
[2025-08-14 03:41:17,481][18676] Stopping RolloutWorker_w0... |
|
[2025-08-14 03:41:17,500][18676] Loop rollout_proc0_evt_loop terminating... |
|
[2025-08-14 03:41:19,033][16404] Waiting for process inference_proc0-0 to join... |
|
[2025-08-14 03:41:19,036][16404] Waiting for process rollout_proc0 to join... |
|
[2025-08-14 03:41:21,844][16404] Waiting for process rollout_proc1 to join... |
|
[2025-08-14 03:41:21,846][16404] Waiting for process rollout_proc2 to join... |
|
[2025-08-14 03:41:21,847][16404] Waiting for process rollout_proc3 to join... |
|
[2025-08-14 03:41:21,848][16404] Waiting for process rollout_proc4 to join... |
|
[2025-08-14 03:41:21,849][16404] Waiting for process rollout_proc5 to join... |
|
[2025-08-14 03:41:21,850][16404] Waiting for process rollout_proc6 to join... |
|
[2025-08-14 03:41:21,851][16404] Waiting for process rollout_proc7 to join... |
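|
The shutdown sequence above is the usual two-phase pattern: signal every component to stop its event loop, then join each worker process. A hedged multiprocessing sketch of that pattern (names are illustrative, not sample-factory's classes):

import multiprocessing as mp

def worker(stop_evt):
    stop_evt.wait()  # stand-in for a rollout worker's event loop

if __name__ == "__main__":
    stop_evt = mp.Event()
    procs = [mp.Process(target=worker, args=(stop_evt,)) for _ in range(8)]
    for p in procs:
        p.start()
    stop_evt.set()   # "Stopping RolloutWorker_w*... loop terminating"
    for p in procs:
        p.join()     # "Waiting for process rollout_proc* to join..."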
|
[2025-08-14 03:41:21,852][16404] Batcher 0 profile tree view: |
|
batching: 28.0669, releasing_batches: 0.0272 |
|
[2025-08-14 03:41:21,853][16404] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0044 |
|
  wait_policy_total: 408.5799 |
|
update_model: 8.8764 |
|
  weight_update: 0.0020 |
|
one_step: 0.0030 |
|
  handle_policy_step: 629.4639 |
|
    deserialize: 15.2555, stack: 3.2888, obs_to_device_normalize: 129.2140, forward: 328.4134, send_messages: 30.6949 |
|
    prepare_outputs: 95.6708 |
|
      to_cpu: 58.0612 |
|
[2025-08-14 03:41:21,855][16404] Learner 0 profile tree view: |
|
misc: 0.0055, prepare_batch: 12.3545 |
|
train: 73.5863 |
|
  epoch_init: 0.0095, minibatch_init: 0.0130, losses_postprocess: 0.6500, kl_divergence: 0.8473, after_optimizer: 33.3674 |
|
  calculate_losses: 25.9014 |
|
    losses_init: 0.0035, forward_head: 1.4532, bptt_initial: 16.8464, tail: 1.1807, advantages_returns: 0.3145, losses: 3.6505 |
|
    bptt: 2.1651 |
|
      bptt_forward_core: 2.0567 |
|
  update: 12.1816 |
|
    clip: 1.0521 |
|
[2025-08-14 03:41:21,856][16404] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.2655, enqueue_policy_requests: 105.2324, env_step: 856.5435, overhead: 14.4798, complete_rollouts: 7.4345 |
|
save_policy_outputs: 19.8488 |
|
  split_output_tensors: 7.7220 |
|
[2025-08-14 03:41:21,857][16404] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.3721, enqueue_policy_requests: 111.9812, env_step: 846.2346, overhead: 14.2123, complete_rollouts: 7.3749 |
|
save_policy_outputs: 20.2849 |
|
  split_output_tensors: 7.4404 |
|
[2025-08-14 03:41:21,858][16404] Loop Runner_EvtLoop terminating... |
|
[2025-08-14 03:41:21,859][16404] Runner profile tree view: |
|
main_loop: 1111.7620 |
|
[2025-08-14 03:41:21,860][16404] Collected {0: 4005888}, FPS: 3603.2 |
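|
Sanity check: the reported figure is just total frames over main-loop wall time, 4005888 / 1111.762 s ≈ 3603.2 FPS, matching the Runner profile above.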
|
[2025-08-14 03:41:58,259][16404] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-14 03:41:58,260][16404] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-14 03:41:58,261][16404] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-14 03:41:58,262][16404] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-14 03:41:58,263][16404] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-14 03:41:58,264][16404] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-14 03:41:58,265][16404] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-14 03:41:58,266][16404] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-14 03:41:58,267][16404] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-14 03:41:58,268][16404] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-14 03:41:58,269][16404] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-14 03:41:58,270][16404] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-14 03:41:58,271][16404] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-14 03:41:58,272][16404] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-14 03:41:58,273][16404] Using frameskip 1 and render_action_repeat=4 for evaluation |
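|
The override lines above correspond to an enjoy-style evaluation run over the saved experiment. A minimal sketch, assuming sample-factory 2.x; parse_vizdoom_cfg and register_vizdoom_components are helpers as defined in the course notebook (assumed names, not library API), and the env name is inferred from the repository names later in this log:

from sample_factory.enjoy import enjoy

register_vizdoom_components()  # assumed helper: registers Doom envs/models
cfg = parse_vizdoom_cfg(       # assumed helper: parses the flags logged above
    argv=["--env=doom_health_gathering_supreme", "--num_workers=1",
          "--save_video", "--no_render", "--max_num_episodes=10"],
    evaluation=True,
)
enjoy(cfg)  # reloads config.json plus the latest checkpoint, then rolls out episodes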
|
[2025-08-14 03:41:58,305][16404] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-14 03:41:58,308][16404] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-14 03:41:58,310][16404] RunningMeanStd input shape: (1,) |
|
[2025-08-14 03:41:58,325][16404] ConvEncoder: input_channels=3 |
|
[2025-08-14 03:41:58,439][16404] Conv encoder output size: 512 |
|
[2025-08-14 03:41:58,440][16404] Policy head output size: 512 |
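|
The shape logs trace one forward path: a normalized 3x72x128 frame (the 128x72 resize of the 160x120 Doom render) is encoded to a 512-dim feature, which feeds a 512-dim policy head. A shape check with stand-in layers; only the two 512s come from the log, the conv stack itself is an assumption:

import torch
import torch.nn as nn

obs = torch.rand(1, 3, 72, 128)      # one resized RGB frame

encoder = nn.Sequential(             # stand-in conv stack, not the real one
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
    nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
    nn.Flatten(),
    nn.LazyLinear(512),              # "Conv encoder output size: 512"
)
policy_head = nn.Linear(512, 512)    # "Policy head output size: 512"

print(policy_head(encoder(obs)).shape)  # torch.Size([1, 512])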
|
[2025-08-14 03:41:58,624][16404] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:41:58,627][16404] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-14 03:41:58,629][16404] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:41:58,632][16404] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-14 03:41:58,634][16404] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:41:58,637][16404] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
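|
All three attempts fail for the same reason: PyTorch 2.6 changed torch.load's weights_only default to True, and this checkpoint pickles a numpy scalar that is not on the safe-globals allowlist. Both workarounds from the error message, sketched below; apply them only to checkpoints you trust:

import numpy as np
import torch

ckpt = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth"

# Option 1: keep weights_only=True but allowlist the offending numpy global.
with torch.serialization.safe_globals([np.core.multiarray.scalar]):
    state = torch.load(ckpt, map_location="cpu")

# Option 2: restore the pre-2.6 behavior; this can execute arbitrary code,
# so use it only for trusted files.
state = torch.load(ckpt, map_location="cpu", weights_only=False)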
|
[2025-08-14 03:53:46,682][16404] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-14 03:53:46,683][16404] Overriding arg 'env' with value 'doom_benchmark' passed from command line |
|
[2025-08-14 03:53:46,685][16404] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-14 03:53:46,686][16404] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-14 03:53:46,687][16404] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-14 03:53:46,689][16404] Adding new argument 'video_frames'=5000 that is not in the saved config file! |
|
[2025-08-14 03:53:46,690][16404] Adding new argument 'video_name'='vizdoom_eval' that is not in the saved config file! |
|
[2025-08-14 03:53:46,691][16404] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-14 03:53:46,692][16404] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-14 03:53:46,693][16404] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-14 03:53:46,694][16404] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-14 03:53:46,695][16404] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-14 03:53:46,697][16404] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-14 03:53:46,698][16404] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-14 03:53:46,699][16404] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-14 03:53:46,701][16404] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-14 03:53:46,730][16404] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-14 03:53:46,731][16404] RunningMeanStd input shape: (1,) |
|
[2025-08-14 03:53:46,742][16404] ConvEncoder: input_channels=3 |
|
[2025-08-14 03:53:46,784][16404] Conv encoder output size: 512 |
|
[2025-08-14 03:53:46,785][16404] Policy head output size: 512 |
|
[2025-08-14 03:53:46,805][16404] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:56:31,664][16404] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-14 03:56:31,665][16404] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-14 03:56:31,666][16404] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-14 03:56:31,668][16404] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-14 03:56:31,669][16404] Adding new argument 'video_frames'=5000 that is not in the saved config file! |
|
[2025-08-14 03:56:31,670][16404] Adding new argument 'video_name'='vizdoom_eval' that is not in the saved config file! |
|
[2025-08-14 03:56:31,671][16404] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-14 03:56:31,672][16404] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-14 03:56:31,673][16404] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-14 03:56:31,674][16404] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-14 03:56:31,675][16404] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-14 03:56:31,676][16404] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-14 03:56:31,677][16404] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-14 03:56:31,678][16404] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-14 03:56:31,679][16404] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-14 03:56:31,730][16404] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-14 03:56:31,733][16404] RunningMeanStd input shape: (1,) |
|
[2025-08-14 03:56:31,752][16404] ConvEncoder: input_channels=3 |
|
[2025-08-14 03:56:31,816][16404] Conv encoder output size: 512 |
|
[2025-08-14 03:56:31,817][16404] Policy head output size: 512 |
|
[2025-08-14 03:56:31,856][16404] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 03:56:32,985][16404] Num frames 100... |
|
[2025-08-14 03:56:33,190][16404] Num frames 200... |
|
[2025-08-14 03:56:33,386][16404] Num frames 300... |
|
[2025-08-14 03:56:33,595][16404] Num frames 400... |
|
[2025-08-14 03:56:33,787][16404] Num frames 500... |
|
[2025-08-14 03:56:33,927][16404] Num frames 600... |
|
[2025-08-14 03:56:34,042][16404] Avg episode rewards: #0: 14.400, true rewards: #0: 6.400 |
|
[2025-08-14 03:56:34,044][16404] Avg episode reward: 14.400, avg true_objective: 6.400 |
|
[2025-08-14 03:56:34,146][16404] Num frames 700... |
|
[2025-08-14 03:56:34,292][16404] Num frames 800... |
|
[2025-08-14 03:56:34,434][16404] Num frames 900... |
|
[2025-08-14 03:56:34,580][16404] Num frames 1000... |
|
[2025-08-14 03:56:34,722][16404] Num frames 1100... |
|
[2025-08-14 03:56:34,872][16404] Num frames 1200... |
|
[2025-08-14 03:56:35,013][16404] Num frames 1300... |
|
[2025-08-14 03:56:35,157][16404] Num frames 1400... |
|
[2025-08-14 03:56:35,225][16404] Avg episode rewards: #0: 17.040, true rewards: #0: 7.040 |
|
[2025-08-14 03:56:35,226][16404] Avg episode reward: 17.040, avg true_objective: 7.040 |
|
[2025-08-14 03:56:35,352][16404] Num frames 1500... |
|
[2025-08-14 03:56:35,487][16404] Num frames 1600... |
|
[2025-08-14 03:56:35,621][16404] Num frames 1700... |
|
[2025-08-14 03:56:35,752][16404] Num frames 1800... |
|
[2025-08-14 03:56:35,886][16404] Num frames 1900... |
|
[2025-08-14 03:56:36,021][16404] Num frames 2000... |
|
[2025-08-14 03:56:36,175][16404] Num frames 2100... |
|
[2025-08-14 03:56:36,316][16404] Num frames 2200... |
|
[2025-08-14 03:56:36,455][16404] Num frames 2300... |
|
[2025-08-14 03:56:36,602][16404] Avg episode rewards: #0: 17.227, true rewards: #0: 7.893 |
|
[2025-08-14 03:56:36,603][16404] Avg episode reward: 17.227, avg true_objective: 7.893 |
|
[2025-08-14 03:56:36,650][16404] Num frames 2400... |
|
[2025-08-14 03:56:36,787][16404] Num frames 2500... |
|
[2025-08-14 03:56:36,937][16404] Num frames 2600... |
|
[2025-08-14 03:56:37,079][16404] Num frames 2700... |
|
[2025-08-14 03:56:37,234][16404] Num frames 2800... |
|
[2025-08-14 03:56:37,385][16404] Num frames 2900... |
|
[2025-08-14 03:56:37,530][16404] Num frames 3000... |
|
[2025-08-14 03:56:37,674][16404] Num frames 3100... |
|
[2025-08-14 03:56:37,815][16404] Num frames 3200... |
|
[2025-08-14 03:56:37,957][16404] Num frames 3300... |
|
[2025-08-14 03:56:38,097][16404] Num frames 3400... |
|
[2025-08-14 03:56:38,256][16404] Num frames 3500... |
|
[2025-08-14 03:56:38,397][16404] Num frames 3600... |
|
[2025-08-14 03:56:38,541][16404] Num frames 3700... |
|
[2025-08-14 03:56:38,686][16404] Num frames 3800... |
|
[2025-08-14 03:56:38,836][16404] Num frames 3900... |
|
[2025-08-14 03:56:38,989][16404] Avg episode rewards: #0: 22.670, true rewards: #0: 9.920 |
|
[2025-08-14 03:56:38,990][16404] Avg episode reward: 22.670, avg true_objective: 9.920 |
|
[2025-08-14 03:56:39,039][16404] Num frames 4000... |
|
[2025-08-14 03:56:39,179][16404] Num frames 4100... |
|
[2025-08-14 03:56:39,342][16404] Num frames 4200... |
|
[2025-08-14 03:56:39,490][16404] Num frames 4300... |
|
[2025-08-14 03:56:39,630][16404] Num frames 4400... |
|
[2025-08-14 03:56:39,778][16404] Num frames 4500... |
|
[2025-08-14 03:56:39,922][16404] Num frames 4600... |
|
[2025-08-14 03:56:40,069][16404] Num frames 4700... |
|
[2025-08-14 03:56:40,213][16404] Num frames 4800... |
|
[2025-08-14 03:56:40,373][16404] Avg episode rewards: #0: 21.528, true rewards: #0: 9.728 |
|
[2025-08-14 03:56:40,375][16404] Avg episode reward: 21.528, avg true_objective: 9.728 |
|
[2025-08-14 03:56:40,430][16404] Num frames 4900... |
|
[2025-08-14 03:56:40,573][16404] Num frames 5000... |
|
[2025-08-14 03:56:40,693][16404] Num frames 5100... |
|
[2025-08-14 03:56:40,812][16404] Num frames 5200... |
|
[2025-08-14 03:56:40,935][16404] Num frames 5300... |
|
[2025-08-14 03:56:41,054][16404] Num frames 5400... |
|
[2025-08-14 03:56:41,171][16404] Num frames 5500... |
|
[2025-08-14 03:56:41,293][16404] Num frames 5600... |
|
[2025-08-14 03:56:41,352][16404] Avg episode rewards: #0: 20.502, true rewards: #0: 9.335 |
|
[2025-08-14 03:56:41,353][16404] Avg episode reward: 20.502, avg true_objective: 9.335 |
|
[2025-08-14 03:56:41,470][16404] Num frames 5700... |
|
[2025-08-14 03:56:41,585][16404] Num frames 5800... |
|
[2025-08-14 03:56:41,703][16404] Num frames 5900... |
|
[2025-08-14 03:56:41,820][16404] Num frames 6000... |
|
[2025-08-14 03:56:41,937][16404] Num frames 6100... |
|
[2025-08-14 03:56:42,056][16404] Num frames 6200... |
|
[2025-08-14 03:56:42,174][16404] Num frames 6300... |
|
[2025-08-14 03:56:42,294][16404] Num frames 6400... |
|
[2025-08-14 03:56:42,428][16404] Num frames 6500... |
|
[2025-08-14 03:56:42,544][16404] Num frames 6600... |
|
[2025-08-14 03:56:42,631][16404] Avg episode rewards: #0: 21.179, true rewards: #0: 9.464 |
|
[2025-08-14 03:56:42,632][16404] Avg episode reward: 21.179, avg true_objective: 9.464 |
|
[2025-08-14 03:56:42,722][16404] Num frames 6700... |
|
[2025-08-14 03:56:42,838][16404] Num frames 6800... |
|
[2025-08-14 03:56:42,955][16404] Num frames 6900... |
|
[2025-08-14 03:56:43,018][16404] Avg episode rewards: #0: 18.883, true rewards: #0: 8.632 |
|
[2025-08-14 03:56:43,019][16404] Avg episode reward: 18.883, avg true_objective: 8.632 |
|
[2025-08-14 03:56:43,132][16404] Num frames 7000... |
|
[2025-08-14 03:56:43,255][16404] Num frames 7100... |
|
[2025-08-14 03:56:43,376][16404] Num frames 7200... |
|
[2025-08-14 03:56:43,514][16404] Num frames 7300... |
|
[2025-08-14 03:56:43,638][16404] Num frames 7400... |
|
[2025-08-14 03:56:43,717][16404] Avg episode rewards: #0: 17.909, true rewards: #0: 8.242 |
|
[2025-08-14 03:56:43,718][16404] Avg episode reward: 17.909, avg true_objective: 8.242 |
|
[2025-08-14 03:56:43,848][16404] Num frames 7500... |
|
[2025-08-14 03:56:44,017][16404] Num frames 7600... |
|
[2025-08-14 03:56:44,174][16404] Num frames 7700... |
|
[2025-08-14 03:56:44,296][16404] Avg episode rewards: #0: 16.738, true rewards: #0: 7.738 |
|
[2025-08-14 03:56:44,297][16404] Avg episode reward: 16.738, avg true_objective: 7.738 |
|
[2025-08-14 03:57:20,207][16404] Replay video saved to /content/train_dir/default_experiment/vizdoom_eval.mp4! |
|
[2025-08-14 04:01:20,324][16404] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-14 04:01:20,325][16404] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-14 04:01:20,326][16404] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-14 04:01:20,327][16404] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-14 04:01:20,328][16404] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-14 04:01:20,329][16404] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-14 04:01:20,330][16404] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-08-14 04:01:20,331][16404] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-14 04:01:20,332][16404] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-08-14 04:01:20,333][16404] Adding new argument 'hf_repository'='ThomasSimonini/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-08-14 04:01:20,335][16404] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-14 04:01:20,336][16404] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-14 04:01:20,337][16404] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-14 04:01:20,338][16404] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-14 04:01:20,340][16404] Using frameskip 1 and render_action_repeat=4 for evaluation |
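|
This final run adds the Hub upload flags. Under the same assumptions as the earlier enjoy sketch (the repository string is copied verbatim from the log):

from sample_factory.enjoy import enjoy

register_vizdoom_components()  # assumed course-notebook helper
cfg = parse_vizdoom_cfg(       # assumed course-notebook helper
    argv=["--env=doom_health_gathering_supreme", "--num_workers=1",
          "--save_video", "--no_render", "--max_num_episodes=10",
          "--max_num_frames=100000", "--push_to_hub",
          "--hf_repository=ThomasSimonini/rl_course_vizdoom_health_gathering_supreme"],
    evaluation=True,
)
enjoy(cfg)  # records replay.mp4 and pushes the model plus video to the Hub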
|
[2025-08-14 04:01:20,368][16404] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-14 04:01:20,369][16404] RunningMeanStd input shape: (1,) |
|
[2025-08-14 04:01:20,381][16404] ConvEncoder: input_channels=3 |
|
[2025-08-14 04:01:20,420][16404] Conv encoder output size: 512 |
|
[2025-08-14 04:01:20,421][16404] Policy head output size: 512 |
|
[2025-08-14 04:01:20,440][16404] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 04:01:20,898][16404] Num frames 100... |
|
[2025-08-14 04:01:21,063][16404] Num frames 200... |
|
[2025-08-14 04:01:21,201][16404] Num frames 300... |
|
[2025-08-14 04:01:21,340][16404] Num frames 400... |
|
[2025-08-14 04:01:21,476][16404] Num frames 500... |
|
[2025-08-14 04:01:21,611][16404] Num frames 600... |
|
[2025-08-14 04:01:21,746][16404] Num frames 700... |
|
[2025-08-14 04:01:21,880][16404] Num frames 800... |
|
[2025-08-14 04:01:22,017][16404] Num frames 900... |
|
[2025-08-14 04:01:22,161][16404] Num frames 1000... |
|
[2025-08-14 04:01:22,298][16404] Num frames 1100... |
|
[2025-08-14 04:01:22,434][16404] Num frames 1200... |
|
[2025-08-14 04:01:22,572][16404] Num frames 1300... |
|
[2025-08-14 04:01:22,707][16404] Num frames 1400... |
|
[2025-08-14 04:01:22,843][16404] Num frames 1500... |
|
[2025-08-14 04:01:35,415][16404] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-14 04:01:35,416][16404] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-14 04:01:35,418][16404] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-14 04:01:35,420][16404] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-14 04:01:35,421][16404] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-14 04:01:35,423][16404] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-14 04:01:35,424][16404] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-08-14 04:01:35,426][16404] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-14 04:01:35,427][16404] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-08-14 04:01:35,429][16404] Adding new argument 'hf_repository'='Quangvuisme/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-08-14 04:01:35,431][16404] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-14 04:01:35,432][16404] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-14 04:01:35,433][16404] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-14 04:01:35,434][16404] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-14 04:01:35,435][16404] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-14 04:01:35,487][16404] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-14 04:01:35,489][16404] RunningMeanStd input shape: (1,) |
|
[2025-08-14 04:01:35,508][16404] ConvEncoder: input_channels=3 |
|
[2025-08-14 04:01:35,563][16404] Conv encoder output size: 512 |
|
[2025-08-14 04:01:35,564][16404] Policy head output size: 512 |
|
[2025-08-14 04:01:35,591][16404] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-14 04:01:36,209][16404] Num frames 100... |
|
[2025-08-14 04:01:36,404][16404] Num frames 200... |
|
[2025-08-14 04:01:36,622][16404] Num frames 300... |
|
[2025-08-14 04:01:36,759][16404] Num frames 400... |
|
[2025-08-14 04:01:36,905][16404] Num frames 500... |
|
[2025-08-14 04:01:37,043][16404] Num frames 600... |
|
[2025-08-14 04:01:37,181][16404] Num frames 700... |
|
[2025-08-14 04:01:37,326][16404] Num frames 800... |
|
[2025-08-14 04:01:37,466][16404] Num frames 900... |
|
[2025-08-14 04:01:37,614][16404] Num frames 1000... |
|
[2025-08-14 04:01:37,770][16404] Num frames 1100... |
|
[2025-08-14 04:01:37,906][16404] Num frames 1200... |
|
[2025-08-14 04:01:38,042][16404] Num frames 1300... |
|
[2025-08-14 04:01:38,172][16404] Avg episode rewards: #0: 33.510, true rewards: #0: 13.510 |
|
[2025-08-14 04:01:38,173][16404] Avg episode reward: 33.510, avg true_objective: 13.510 |
|
[2025-08-14 04:01:38,241][16404] Num frames 1400... |
|
[2025-08-14 04:01:38,376][16404] Num frames 1500... |
|
[2025-08-14 04:01:38,516][16404] Num frames 1600... |
|
[2025-08-14 04:01:38,654][16404] Num frames 1700... |
|
[2025-08-14 04:01:38,815][16404] Num frames 1800... |
|
[2025-08-14 04:01:38,957][16404] Num frames 1900... |
|
[2025-08-14 04:01:39,099][16404] Num frames 2000... |
|
[2025-08-14 04:01:39,237][16404] Num frames 2100... |
|
[2025-08-14 04:01:39,385][16404] Num frames 2200... |
|
[2025-08-14 04:01:39,530][16404] Num frames 2300... |
|
[2025-08-14 04:01:39,673][16404] Num frames 2400... |
|
[2025-08-14 04:01:39,832][16404] Num frames 2500... |
|
[2025-08-14 04:01:39,978][16404] Num frames 2600... |
|
[2025-08-14 04:01:40,118][16404] Num frames 2700... |
|
[2025-08-14 04:01:40,258][16404] Num frames 2800... |
|
[2025-08-14 04:01:40,399][16404] Num frames 2900... |
|
[2025-08-14 04:01:40,487][16404] Avg episode rewards: #0: 37.110, true rewards: #0: 14.610 |
|
[2025-08-14 04:01:40,488][16404] Avg episode reward: 37.110, avg true_objective: 14.610 |
|
[2025-08-14 04:01:40,601][16404] Num frames 3000... |
|
[2025-08-14 04:01:40,736][16404] Num frames 3100... |
|
[2025-08-14 04:01:40,889][16404] Num frames 3200... |
|
[2025-08-14 04:01:41,032][16404] Num frames 3300... |
|
[2025-08-14 04:01:41,172][16404] Num frames 3400... |
|
[2025-08-14 04:01:41,313][16404] Num frames 3500... |
|
[2025-08-14 04:01:41,450][16404] Num frames 3600... |
|
[2025-08-14 04:01:41,587][16404] Num frames 3700... |
|
[2025-08-14 04:01:41,724][16404] Num frames 3800... |
|
[2025-08-14 04:01:41,863][16404] Avg episode rewards: #0: 31.177, true rewards: #0: 12.843 |
|
[2025-08-14 04:01:41,864][16404] Avg episode reward: 31.177, avg true_objective: 12.843 |
|
[2025-08-14 04:01:41,930][16404] Num frames 3900... |
|
[2025-08-14 04:01:42,067][16404] Num frames 4000... |
|
[2025-08-14 04:01:42,204][16404] Num frames 4100... |
|
[2025-08-14 04:01:42,346][16404] Num frames 4200... |
|
[2025-08-14 04:01:42,484][16404] Num frames 4300... |
|
[2025-08-14 04:01:42,622][16404] Num frames 4400... |
|
[2025-08-14 04:01:42,760][16404] Num frames 4500... |
|
[2025-08-14 04:01:42,911][16404] Num frames 4600... |
|
[2025-08-14 04:01:43,050][16404] Num frames 4700... |
|
[2025-08-14 04:01:43,190][16404] Num frames 4800... |
|
[2025-08-14 04:01:43,329][16404] Num frames 4900... |
|
[2025-08-14 04:01:43,468][16404] Num frames 5000... |
|
[2025-08-14 04:01:43,605][16404] Num frames 5100... |
|
[2025-08-14 04:01:43,745][16404] Num frames 5200... |
|
[2025-08-14 04:01:43,882][16404] Avg episode rewards: #0: 30.645, true rewards: #0: 13.145 |
|
[2025-08-14 04:01:43,883][16404] Avg episode reward: 30.645, avg true_objective: 13.145 |
|
[2025-08-14 04:01:43,941][16404] Num frames 5300... |
|
[2025-08-14 04:01:44,079][16404] Num frames 5400... |
|
[2025-08-14 04:01:44,219][16404] Num frames 5500... |
|
[2025-08-14 04:01:44,362][16404] Num frames 5600... |
|
[2025-08-14 04:01:44,502][16404] Num frames 5700... |
|
[2025-08-14 04:01:44,640][16404] Num frames 5800... |
|
[2025-08-14 04:01:44,821][16404] Avg episode rewards: #0: 27.182, true rewards: #0: 11.782 |
|
[2025-08-14 04:01:44,822][16404] Avg episode reward: 27.182, avg true_objective: 11.782 |
|
[2025-08-14 04:01:44,837][16404] Num frames 5900... |
|
[2025-08-14 04:01:44,988][16404] Num frames 6000... |
|
[2025-08-14 04:01:45,130][16404] Num frames 6100... |
|
[2025-08-14 04:01:45,270][16404] Num frames 6200... |
|
[2025-08-14 04:01:45,408][16404] Num frames 6300... |
|
[2025-08-14 04:01:45,546][16404] Num frames 6400... |
|
[2025-08-14 04:01:45,607][16404] Avg episode rewards: #0: 23.838, true rewards: #0: 10.672 |
|
[2025-08-14 04:01:45,608][16404] Avg episode reward: 23.838, avg true_objective: 10.672 |
|
[2025-08-14 04:01:45,741][16404] Num frames 6500... |
|
[2025-08-14 04:01:45,880][16404] Num frames 6600... |
|
[2025-08-14 04:01:46,029][16404] Num frames 6700... |
|
[2025-08-14 04:01:46,171][16404] Num frames 6800... |
|
[2025-08-14 04:01:46,309][16404] Num frames 6900... |
|
[2025-08-14 04:01:46,452][16404] Num frames 7000... |
|
[2025-08-14 04:01:46,590][16404] Num frames 7100... |
|
[2025-08-14 04:01:46,773][16404] Num frames 7200... |
|
[2025-08-14 04:01:46,959][16404] Num frames 7300... |
|
[2025-08-14 04:01:47,149][16404] Num frames 7400... |
|
[2025-08-14 04:01:47,390][16404] Avg episode rewards: #0: 23.559, true rewards: #0: 10.701 |
|
[2025-08-14 04:01:47,391][16404] Avg episode reward: 23.559, avg true_objective: 10.701 |
|
[2025-08-14 04:01:47,412][16404] Num frames 7500... |
|
[2025-08-14 04:01:47,592][16404] Num frames 7600... |
|
[2025-08-14 04:01:47,765][16404] Num frames 7700... |
|
[2025-08-14 04:01:47,938][16404] Num frames 7800... |
|
[2025-08-14 04:01:48,155][16404] Num frames 7900... |
|
[2025-08-14 04:01:48,352][16404] Num frames 8000... |
|
[2025-08-14 04:01:48,542][16404] Num frames 8100... |
|
[2025-08-14 04:01:48,746][16404] Num frames 8200... |
|
[2025-08-14 04:01:48,946][16404] Num frames 8300... |
|
[2025-08-14 04:01:49,115][16404] Num frames 8400... |
|
[2025-08-14 04:01:49,264][16404] Num frames 8500... |
|
[2025-08-14 04:01:49,400][16404] Num frames 8600... |
|
[2025-08-14 04:01:49,537][16404] Num frames 8700... |
|
[2025-08-14 04:01:49,603][16404] Avg episode rewards: #0: 24.009, true rewards: #0: 10.884 |
|
[2025-08-14 04:01:49,604][16404] Avg episode reward: 24.009, avg true_objective: 10.884 |
|
[2025-08-14 04:01:49,726][16404] Num frames 8800... |
|
[2025-08-14 04:01:49,857][16404] Num frames 8900... |
|
[2025-08-14 04:01:49,994][16404] Avg episode rewards: #0: 21.626, true rewards: #0: 9.959 |
|
[2025-08-14 04:01:49,995][16404] Avg episode reward: 21.626, avg true_objective: 9.959 |
|
[2025-08-14 04:01:50,049][16404] Num frames 9000... |
|
[2025-08-14 04:01:50,189][16404] Num frames 9100... |
|
[2025-08-14 04:01:50,334][16404] Num frames 9200... |
|
[2025-08-14 04:01:50,468][16404] Num frames 9300... |
|
[2025-08-14 04:01:50,603][16404] Num frames 9400... |
|
[2025-08-14 04:01:50,741][16404] Num frames 9500... |
|
[2025-08-14 04:01:50,806][16404] Avg episode rewards: #0: 20.407, true rewards: #0: 9.507 |
|
[2025-08-14 04:01:50,808][16404] Avg episode reward: 20.407, avg true_objective: 9.507 |
|
[2025-08-14 04:02:55,376][16404] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
|