|
[2025-02-20 04:04:12,863][01130] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-02-20 04:04:12,869][01130] Rollout worker 0 uses device cpu
[2025-02-20 04:04:12,869][01130] Rollout worker 1 uses device cpu
[2025-02-20 04:04:12,871][01130] Rollout worker 2 uses device cpu
[2025-02-20 04:04:12,873][01130] Rollout worker 3 uses device cpu
[2025-02-20 04:04:12,875][01130] Rollout worker 4 uses device cpu
[2025-02-20 04:04:12,877][01130] Rollout worker 5 uses device cpu
[2025-02-20 04:04:12,879][01130] Rollout worker 6 uses device cpu
[2025-02-20 04:04:12,881][01130] Rollout worker 7 uses device cpu
[2025-02-20 04:04:13,109][01130] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-20 04:04:13,112][01130] InferenceWorker_p0-w0: min num requests: 2
[2025-02-20 04:04:13,157][01130] Starting all processes...
[2025-02-20 04:04:13,161][01130] Starting process learner_proc0
[2025-02-20 04:04:13,276][01130] Starting all processes...
[2025-02-20 04:04:13,395][01130] Starting process inference_proc0-0
[2025-02-20 04:04:13,397][01130] Starting process rollout_proc0
[2025-02-20 04:04:13,397][01130] Starting process rollout_proc1
[2025-02-20 04:04:13,397][01130] Starting process rollout_proc2
[2025-02-20 04:04:13,400][01130] Starting process rollout_proc3
[2025-02-20 04:04:13,400][01130] Starting process rollout_proc4
[2025-02-20 04:04:13,400][01130] Starting process rollout_proc5
[2025-02-20 04:04:13,400][01130] Starting process rollout_proc6
[2025-02-20 04:04:13,401][01130] Starting process rollout_proc7
[2025-02-20 04:04:28,681][03375] Worker 1 uses CPU cores [1]
[2025-02-20 04:04:29,398][03356] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-20 04:04:29,399][03356] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-02-20 04:04:29,524][03356] Num visible devices: 1
[2025-02-20 04:04:29,562][03356] Starting seed is not provided
[2025-02-20 04:04:29,562][03356] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-20 04:04:29,563][03356] Initializing actor-critic model on device cuda:0
[2025-02-20 04:04:29,564][03356] RunningMeanStd input shape: (3, 72, 128)
[2025-02-20 04:04:29,567][03356] RunningMeanStd input shape: (1,)
[2025-02-20 04:04:29,707][03376] Worker 2 uses CPU cores [0]
[2025-02-20 04:04:29,745][03356] ConvEncoder: input_channels=3
[2025-02-20 04:04:29,938][03378] Worker 4 uses CPU cores [0]
[2025-02-20 04:04:29,993][03380] Worker 6 uses CPU cores [0]
[2025-02-20 04:04:30,018][03379] Worker 5 uses CPU cores [1]
[2025-02-20 04:04:30,188][03374] Worker 0 uses CPU cores [0]
[2025-02-20 04:04:30,188][03377] Worker 3 uses CPU cores [1]
[2025-02-20 04:04:30,306][03373] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-20 04:04:30,306][03373] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-02-20 04:04:30,344][03381] Worker 7 uses CPU cores [1]
[2025-02-20 04:04:30,367][03373] Num visible devices: 1
[2025-02-20 04:04:30,452][03356] Conv encoder output size: 512
[2025-02-20 04:04:30,452][03356] Policy head output size: 512
[2025-02-20 04:04:30,528][03356] Created Actor Critic model with architecture:
[2025-02-20 04:04:30,529][03356] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
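The printout above elides layer hyperparameters (TorchScript modules hide their constructor arguments). Below is a minimal PyTorch sketch of the implied shape arithmetic; the conv filter spec is an assumption (Sample Factory's default convnet_simple: 32/64/128 channels, kernels 8/4/3, strides 4/2/2), chosen because it reproduces the logged 512-dim encoder output for (3, 72, 128) observations.

import torch
import torch.nn as nn

# The filter spec below is an assumption, not shown in the printout above.
conv_head = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
    nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
)
obs = torch.zeros(1, 3, 72, 128)        # "RunningMeanStd input shape: (3, 72, 128)"
flat = conv_head(obs).flatten(1)        # -> (1, 2304) under the assumed spec
mlp_layers = nn.Sequential(nn.Linear(flat.shape[1], 512), nn.ELU())
core = nn.GRU(512, 512)                 # "(core): GRU(512, 512)"
critic_linear = nn.Linear(512, 1)       # value head
action_head = nn.Linear(512, 5)         # distribution_linear: 5 discrete actions
print(mlp_layers(flat).shape)           # torch.Size([1, 512]) -- "Conv encoder output size: 512"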
|
[2025-02-20 04:04:30,855][03356] Using optimizer <class 'torch.optim.adam.Adam'>
[2025-02-20 04:04:33,102][01130] Heartbeat connected on Batcher_0
[2025-02-20 04:04:33,109][01130] Heartbeat connected on InferenceWorker_p0-w0
[2025-02-20 04:04:33,121][01130] Heartbeat connected on RolloutWorker_w0
[2025-02-20 04:04:33,131][01130] Heartbeat connected on RolloutWorker_w2
[2025-02-20 04:04:33,132][01130] Heartbeat connected on RolloutWorker_w1
[2025-02-20 04:04:33,141][01130] Heartbeat connected on RolloutWorker_w4
[2025-02-20 04:04:33,141][01130] Heartbeat connected on RolloutWorker_w3
[2025-02-20 04:04:33,147][01130] Heartbeat connected on RolloutWorker_w5
[2025-02-20 04:04:33,152][01130] Heartbeat connected on RolloutWorker_w6
[2025-02-20 04:04:33,158][01130] Heartbeat connected on RolloutWorker_w7
[2025-02-20 04:04:35,243][03356] No checkpoints found
[2025-02-20 04:04:35,243][03356] Did not load from checkpoint, starting from scratch!
[2025-02-20 04:04:35,244][03356] Initialized policy 0 weights for model version 0
[2025-02-20 04:04:35,247][03356] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-20 04:04:35,266][03356] LearnerWorker_p0 finished initialization!
[2025-02-20 04:04:35,267][01130] Heartbeat connected on LearnerWorker_p0
[2025-02-20 04:04:35,407][03373] RunningMeanStd input shape: (3, 72, 128)
[2025-02-20 04:04:35,408][03373] RunningMeanStd input shape: (1,)
[2025-02-20 04:04:35,419][03373] ConvEncoder: input_channels=3
[2025-02-20 04:04:35,521][03373] Conv encoder output size: 512
[2025-02-20 04:04:35,521][03373] Policy head output size: 512
[2025-02-20 04:04:35,556][01130] Inference worker 0-0 is ready!
[2025-02-20 04:04:35,557][01130] All inference workers are ready! Signal rollout workers to start!
[2025-02-20 04:04:35,815][03376] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-20 04:04:35,827][03381] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-20 04:04:35,845][03375] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-20 04:04:35,934][03380] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-20 04:04:35,938][03374] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-20 04:04:35,959][03377] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-20 04:04:35,976][03378] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-20 04:04:35,993][03379] Doom resolution: 160x120, resize resolution: (128, 72)
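The rollout workers render Doom at 160x120 and resize to (128, 72); note the resize tuple is (width, height), while the normalizer shape logged earlier, (3, 72, 128), is (channels, height, width). A small sketch of that shape bookkeeping (cv2 is used here purely for illustration; the exact resize call inside the env wrapper is not shown in this log):

import cv2
import numpy as np

frame = np.zeros((120, 160, 3), dtype=np.uint8)  # native Doom frame: height 120, width 160
resized = cv2.resize(frame, (128, 72))           # cv2 dsize is (width, height)
chw = resized.transpose(2, 0, 1)                 # HWC -> CHW
print(chw.shape)                                 # (3, 72, 128), matching the normalizer shape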
|
[2025-02-20 04:04:37,234][03374] Decorrelating experience for 0 frames...
[2025-02-20 04:04:37,235][03375] Decorrelating experience for 0 frames...
[2025-02-20 04:04:37,236][03377] Decorrelating experience for 0 frames...
[2025-02-20 04:04:37,603][03378] Decorrelating experience for 0 frames...
[2025-02-20 04:04:37,975][03378] Decorrelating experience for 32 frames...
[2025-02-20 04:04:38,218][03377] Decorrelating experience for 32 frames...
[2025-02-20 04:04:38,220][03375] Decorrelating experience for 32 frames...
[2025-02-20 04:04:38,733][03379] Decorrelating experience for 0 frames...
[2025-02-20 04:04:38,868][03378] Decorrelating experience for 64 frames...
[2025-02-20 04:04:38,927][01130] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-20 04:04:39,100][03374] Decorrelating experience for 32 frames...
[2025-02-20 04:04:39,740][03377] Decorrelating experience for 64 frames...
[2025-02-20 04:04:39,743][03379] Decorrelating experience for 32 frames...
[2025-02-20 04:04:39,769][03378] Decorrelating experience for 96 frames...
[2025-02-20 04:04:40,133][03374] Decorrelating experience for 64 frames...
[2025-02-20 04:04:40,416][03375] Decorrelating experience for 64 frames...
[2025-02-20 04:04:41,028][03374] Decorrelating experience for 96 frames...
[2025-02-20 04:04:41,147][03377] Decorrelating experience for 96 frames...
[2025-02-20 04:04:41,545][03379] Decorrelating experience for 64 frames...
[2025-02-20 04:04:42,111][03375] Decorrelating experience for 96 frames...
[2025-02-20 04:04:43,890][03379] Decorrelating experience for 96 frames...
[2025-02-20 04:04:43,927][01130] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 33.6. Samples: 168. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-20 04:04:43,929][01130] Avg episode reward: [(0, '2.336')]
[2025-02-20 04:04:46,138][03356] Signal inference workers to stop experience collection...
[2025-02-20 04:04:46,154][03373] InferenceWorker_p0-w0: stopping experience collection
[2025-02-20 04:04:47,587][03356] Signal inference workers to resume experience collection...
[2025-02-20 04:04:47,589][03373] InferenceWorker_p0-w0: resuming experience collection
[2025-02-20 04:04:48,927][01130] Fps is (10 sec: 1228.8, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 12288. Throughput: 0: 246.4. Samples: 2464. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-02-20 04:04:48,929][01130] Avg episode reward: [(0, '3.583')]
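Each "Fps is" line reports rolling throughput over 10/60/300-second windows plus the cumulative frame count, and is usually followed by the mean episode return for policy 0. A self-contained sketch (not part of Sample Factory; the regexes are written against the line format above) for scraping these pairs into learning-curve rows:

import re

FPS_RE = re.compile(r"Fps is \(10 sec: ([\d.]+|nan).*?Total num frames: (\d+)")
REWARD_RE = re.compile(r"Avg episode reward: \[\(0, '([-\d.]+)'\)\]")

def parse_training_log(path):
    rows, frames, fps10 = [], None, None
    with open(path) as f:
        for line in f:
            m = FPS_RE.search(line)
            if m:
                fps10, frames = float(m.group(1)), int(m.group(2))  # float("nan") parses fine
                continue
            m = REWARD_RE.search(line)
            if m and frames is not None:
                rows.append((frames, fps10, float(m.group(1))))
    return rows

# For the log above, the first rows would be:
# [(0, 0.0, 2.336), (12288, 1228.8, 3.583), ...]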
|
[2025-02-20 04:04:53,927][01130] Fps is (10 sec: 3276.8, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 32768. Throughput: 0: 578.1. Samples: 8672. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:04:53,932][01130] Avg episode reward: [(0, '4.136')]
[2025-02-20 04:04:55,514][03373] Updated weights for policy 0, policy_version 10 (0.0014)
[2025-02-20 04:04:58,927][01130] Fps is (10 sec: 3686.4, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 49152. Throughput: 0: 568.7. Samples: 11374. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:04:58,932][01130] Avg episode reward: [(0, '4.500')]
[2025-02-20 04:05:03,927][01130] Fps is (10 sec: 3686.4, 60 sec: 2785.3, 300 sec: 2785.3). Total num frames: 69632. Throughput: 0: 665.2. Samples: 16630. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:05:03,930][01130] Avg episode reward: [(0, '4.466')]
[2025-02-20 04:05:06,450][03373] Updated weights for policy 0, policy_version 20 (0.0015)
[2025-02-20 04:05:08,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3003.7, 300 sec: 3003.7). Total num frames: 90112. Throughput: 0: 752.1. Samples: 22562. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:05:08,929][01130] Avg episode reward: [(0, '4.433')]
[2025-02-20 04:05:13,927][01130] Fps is (10 sec: 3276.8, 60 sec: 2925.7, 300 sec: 2925.7). Total num frames: 102400. Throughput: 0: 706.7. Samples: 24736. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:05:13,932][01130] Avg episode reward: [(0, '4.250')]
[2025-02-20 04:05:13,940][03356] Saving new best policy, reward=4.250!
[2025-02-20 04:05:18,278][03373] Updated weights for policy 0, policy_version 30 (0.0021)
[2025-02-20 04:05:18,927][01130] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 3072.0). Total num frames: 122880. Throughput: 0: 755.5. Samples: 30218. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:05:18,932][01130] Avg episode reward: [(0, '4.288')]
[2025-02-20 04:05:18,935][03356] Saving new best policy, reward=4.288!
[2025-02-20 04:05:23,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3185.8, 300 sec: 3185.8). Total num frames: 143360. Throughput: 0: 808.8. Samples: 36396. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:05:23,932][01130] Avg episode reward: [(0, '4.431')]
[2025-02-20 04:05:23,940][03356] Saving new best policy, reward=4.431!
[2025-02-20 04:05:28,927][01130] Fps is (10 sec: 3276.8, 60 sec: 3113.0, 300 sec: 3113.0). Total num frames: 155648. Throughput: 0: 848.6. Samples: 38356. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:05:28,929][01130] Avg episode reward: [(0, '4.355')]
[2025-02-20 04:05:30,678][03373] Updated weights for policy 0, policy_version 40 (0.0017)
[2025-02-20 04:05:33,928][01130] Fps is (10 sec: 3276.7, 60 sec: 3202.3, 300 sec: 3202.3). Total num frames: 176128. Throughput: 0: 908.5. Samples: 43346. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:05:33,929][01130] Avg episode reward: [(0, '4.327')]
[2025-02-20 04:05:38,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3208.5). Total num frames: 192512. Throughput: 0: 898.1. Samples: 49088. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:05:38,929][01130] Avg episode reward: [(0, '4.592')]
[2025-02-20 04:05:38,931][03356] Saving new best policy, reward=4.592!
[2025-02-20 04:05:42,328][03373] Updated weights for policy 0, policy_version 50 (0.0016)
[2025-02-20 04:05:43,927][01130] Fps is (10 sec: 3276.9, 60 sec: 3481.6, 300 sec: 3213.8). Total num frames: 208896. Throughput: 0: 878.6. Samples: 50910. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:05:43,930][01130] Avg episode reward: [(0, '4.478')]
[2025-02-20 04:05:48,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 229376. Throughput: 0: 897.8. Samples: 57030. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:05:48,933][01130] Avg episode reward: [(0, '4.301')]
[2025-02-20 04:05:51,907][03373] Updated weights for policy 0, policy_version 60 (0.0015)
[2025-02-20 04:05:53,929][01130] Fps is (10 sec: 4095.3, 60 sec: 3618.0, 300 sec: 3331.3). Total num frames: 249856. Throughput: 0: 894.7. Samples: 62824. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:05:53,936][01130] Avg episode reward: [(0, '4.340')]
[2025-02-20 04:05:58,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3328.0). Total num frames: 266240. Throughput: 0: 895.0. Samples: 65010. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:05:58,929][01130] Avg episode reward: [(0, '4.275')]
[2025-02-20 04:06:03,111][03373] Updated weights for policy 0, policy_version 70 (0.0015)
[2025-02-20 04:06:03,927][01130] Fps is (10 sec: 3687.0, 60 sec: 3618.1, 300 sec: 3373.2). Total num frames: 286720. Throughput: 0: 917.9. Samples: 71524. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:06:03,929][01130] Avg episode reward: [(0, '4.434')]
[2025-02-20 04:06:03,933][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000070_286720.pth...
[2025-02-20 04:06:08,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3549.8, 300 sec: 3367.8). Total num frames: 303104. Throughput: 0: 897.7. Samples: 76794. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:06:08,932][01130] Avg episode reward: [(0, '4.433')]
[2025-02-20 04:06:13,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3406.1). Total num frames: 323584. Throughput: 0: 909.3. Samples: 79274. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:06:13,929][01130] Avg episode reward: [(0, '4.529')]
[2025-02-20 04:06:14,315][03373] Updated weights for policy 0, policy_version 80 (0.0015)
[2025-02-20 04:06:18,927][01130] Fps is (10 sec: 4096.2, 60 sec: 3686.4, 300 sec: 3440.6). Total num frames: 344064. Throughput: 0: 940.9. Samples: 85688. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:06:18,929][01130] Avg episode reward: [(0, '4.474')]
[2025-02-20 04:06:23,929][01130] Fps is (10 sec: 3685.6, 60 sec: 3618.0, 300 sec: 3432.8). Total num frames: 360448. Throughput: 0: 925.9. Samples: 90756. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:06:23,934][01130] Avg episode reward: [(0, '4.439')]
[2025-02-20 04:06:25,471][03373] Updated weights for policy 0, policy_version 90 (0.0016)
[2025-02-20 04:06:28,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3463.0). Total num frames: 380928. Throughput: 0: 950.4. Samples: 93680. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:06:28,931][01130] Avg episode reward: [(0, '4.493')]
[2025-02-20 04:06:33,927][01130] Fps is (10 sec: 4096.8, 60 sec: 3754.7, 300 sec: 3490.5). Total num frames: 401408. Throughput: 0: 958.2. Samples: 100150. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:06:33,931][01130] Avg episode reward: [(0, '4.458')]
[2025-02-20 04:06:35,589][03373] Updated weights for policy 0, policy_version 100 (0.0020)
[2025-02-20 04:06:38,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3481.6). Total num frames: 417792. Throughput: 0: 935.7. Samples: 104930. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:06:38,932][01130] Avg episode reward: [(0, '4.354')]
[2025-02-20 04:06:43,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3506.2). Total num frames: 438272. Throughput: 0: 957.8. Samples: 108112. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:06:43,932][01130] Avg episode reward: [(0, '4.564')]
[2025-02-20 04:06:46,205][03373] Updated weights for policy 0, policy_version 110 (0.0015)
[2025-02-20 04:06:48,929][01130] Fps is (10 sec: 4095.4, 60 sec: 3822.8, 300 sec: 3528.8). Total num frames: 458752. Throughput: 0: 955.9. Samples: 114542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:06:48,932][01130] Avg episode reward: [(0, '4.703')]
[2025-02-20 04:06:48,935][03356] Saving new best policy, reward=4.703!
[2025-02-20 04:06:53,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3754.8, 300 sec: 3519.5). Total num frames: 475136. Throughput: 0: 945.0. Samples: 119320. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:06:53,929][01130] Avg episode reward: [(0, '4.681')]
[2025-02-20 04:06:57,114][03373] Updated weights for policy 0, policy_version 120 (0.0026)
[2025-02-20 04:06:58,927][01130] Fps is (10 sec: 3686.9, 60 sec: 3822.9, 300 sec: 3540.1). Total num frames: 495616. Throughput: 0: 961.4. Samples: 122536. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:06:58,932][01130] Avg episode reward: [(0, '4.616')]
[2025-02-20 04:07:03,934][01130] Fps is (10 sec: 4093.5, 60 sec: 3822.5, 300 sec: 3559.1). Total num frames: 516096. Throughput: 0: 958.6. Samples: 128830. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:07:03,935][01130] Avg episode reward: [(0, '4.650')]
[2025-02-20 04:07:08,298][03373] Updated weights for policy 0, policy_version 130 (0.0017)
[2025-02-20 04:07:08,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3549.9). Total num frames: 532480. Throughput: 0: 952.8. Samples: 133632. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:07:08,929][01130] Avg episode reward: [(0, '4.679')]
[2025-02-20 04:07:13,927][01130] Fps is (10 sec: 3688.8, 60 sec: 3822.9, 300 sec: 3567.5). Total num frames: 552960. Throughput: 0: 955.1. Samples: 136658. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:07:13,929][01130] Avg episode reward: [(0, '4.733')]
[2025-02-20 04:07:13,936][03356] Saving new best policy, reward=4.733!
[2025-02-20 04:07:18,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3558.4). Total num frames: 569344. Throughput: 0: 941.4. Samples: 142512. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:07:18,934][01130] Avg episode reward: [(0, '4.604')]
[2025-02-20 04:07:19,550][03373] Updated weights for policy 0, policy_version 140 (0.0014)
[2025-02-20 04:07:23,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3823.0, 300 sec: 3574.7). Total num frames: 589824. Throughput: 0: 952.1. Samples: 147774. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:07:23,932][01130] Avg episode reward: [(0, '4.593')]
[2025-02-20 04:07:28,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3590.0). Total num frames: 610304. Throughput: 0: 950.4. Samples: 150878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:07:28,929][01130] Avg episode reward: [(0, '4.464')]
[2025-02-20 04:07:29,423][03373] Updated weights for policy 0, policy_version 150 (0.0014)
[2025-02-20 04:07:33,932][01130] Fps is (10 sec: 3684.9, 60 sec: 3754.4, 300 sec: 3581.0). Total num frames: 626688. Throughput: 0: 933.5. Samples: 156554. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:07:33,935][01130] Avg episode reward: [(0, '4.438')]
[2025-02-20 04:07:38,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3595.4). Total num frames: 647168. Throughput: 0: 952.3. Samples: 162174. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:07:38,932][01130] Avg episode reward: [(0, '4.495')]
[2025-02-20 04:07:40,394][03373] Updated weights for policy 0, policy_version 160 (0.0015)
[2025-02-20 04:07:43,927][01130] Fps is (10 sec: 3688.0, 60 sec: 3754.7, 300 sec: 3586.8). Total num frames: 663552. Throughput: 0: 942.1. Samples: 164932. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:07:43,929][01130] Avg episode reward: [(0, '4.658')]
[2025-02-20 04:07:48,927][01130] Fps is (10 sec: 2867.2, 60 sec: 3618.2, 300 sec: 3557.1). Total num frames: 675840. Throughput: 0: 893.6. Samples: 169034. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:07:48,929][01130] Avg episode reward: [(0, '4.838')]
[2025-02-20 04:07:48,931][03356] Saving new best policy, reward=4.838!
[2025-02-20 04:07:52,935][03373] Updated weights for policy 0, policy_version 170 (0.0022)
[2025-02-20 04:07:53,927][01130] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3570.9). Total num frames: 696320. Throughput: 0: 918.9. Samples: 174984. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:07:53,928][01130] Avg episode reward: [(0, '5.085')]
[2025-02-20 04:07:53,994][03356] Saving new best policy, reward=5.085!
[2025-02-20 04:07:58,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3584.0). Total num frames: 716800. Throughput: 0: 920.5. Samples: 178080. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:07:58,935][01130] Avg episode reward: [(0, '5.058')]
[2025-02-20 04:08:03,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3618.5, 300 sec: 3576.5). Total num frames: 733184. Throughput: 0: 897.0. Samples: 182878. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:08:03,932][01130] Avg episode reward: [(0, '4.869')]
[2025-02-20 04:08:03,939][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000179_733184.pth...
[2025-02-20 04:08:04,304][03373] Updated weights for policy 0, policy_version 180 (0.0022)
[2025-02-20 04:08:08,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3588.9). Total num frames: 753664. Throughput: 0: 920.3. Samples: 189188. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:08:08,929][01130] Avg episode reward: [(0, '4.894')]
[2025-02-20 04:08:13,928][01130] Fps is (10 sec: 4095.8, 60 sec: 3686.4, 300 sec: 3600.7). Total num frames: 774144. Throughput: 0: 924.4. Samples: 192478. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:08:13,931][01130] Avg episode reward: [(0, '5.050')]
[2025-02-20 04:08:14,632][03373] Updated weights for policy 0, policy_version 190 (0.0018)
[2025-02-20 04:08:18,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3593.3). Total num frames: 790528. Throughput: 0: 903.8. Samples: 197222. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:08:18,929][01130] Avg episode reward: [(0, '5.040')]
[2025-02-20 04:08:23,927][01130] Fps is (10 sec: 4096.3, 60 sec: 3754.7, 300 sec: 3622.7). Total num frames: 815104. Throughput: 0: 924.7. Samples: 203786. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:08:23,929][01130] Avg episode reward: [(0, '5.164')]
[2025-02-20 04:08:23,935][03356] Saving new best policy, reward=5.164!
[2025-02-20 04:08:24,903][03373] Updated weights for policy 0, policy_version 200 (0.0023)
[2025-02-20 04:08:28,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3615.2). Total num frames: 831488. Throughput: 0: 932.6. Samples: 206900. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:08:28,929][01130] Avg episode reward: [(0, '4.907')]
[2025-02-20 04:08:33,931][01130] Fps is (10 sec: 3275.7, 60 sec: 3686.5, 300 sec: 3607.9). Total num frames: 847872. Throughput: 0: 947.6. Samples: 211680. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:08:33,932][01130] Avg episode reward: [(0, '4.972')]
[2025-02-20 04:08:35,980][03373] Updated weights for policy 0, policy_version 210 (0.0019)
[2025-02-20 04:08:38,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3618.1). Total num frames: 868352. Throughput: 0: 957.2. Samples: 218060. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:08:38,929][01130] Avg episode reward: [(0, '5.167')]
[2025-02-20 04:08:38,944][03356] Saving new best policy, reward=5.167!
[2025-02-20 04:08:43,927][01130] Fps is (10 sec: 3687.6, 60 sec: 3686.4, 300 sec: 3611.2). Total num frames: 884736. Throughput: 0: 951.0. Samples: 220876. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:08:43,932][01130] Avg episode reward: [(0, '5.192')]
[2025-02-20 04:08:43,941][03356] Saving new best policy, reward=5.192!
[2025-02-20 04:08:47,159][03373] Updated weights for policy 0, policy_version 220 (0.0017)
[2025-02-20 04:08:48,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3620.9). Total num frames: 905216. Throughput: 0: 960.8. Samples: 226114. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:08:48,939][01130] Avg episode reward: [(0, '5.334')]
[2025-02-20 04:08:48,980][03356] Saving new best policy, reward=5.334!
[2025-02-20 04:08:53,927][01130] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3646.2). Total num frames: 929792. Throughput: 0: 965.4. Samples: 232630. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:08:53,932][01130] Avg episode reward: [(0, '5.564')]
[2025-02-20 04:08:53,939][03356] Saving new best policy, reward=5.564!
[2025-02-20 04:08:57,742][03373] Updated weights for policy 0, policy_version 230 (0.0016)
[2025-02-20 04:08:58,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3623.4). Total num frames: 942080. Throughput: 0: 945.9. Samples: 235044. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:08:58,929][01130] Avg episode reward: [(0, '5.669')]
[2025-02-20 04:08:58,930][03356] Saving new best policy, reward=5.669!
[2025-02-20 04:09:03,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3647.8). Total num frames: 966656. Throughput: 0: 964.2. Samples: 240612. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:09:03,929][01130] Avg episode reward: [(0, '5.602')]
[2025-02-20 04:09:07,792][03373] Updated weights for policy 0, policy_version 240 (0.0015)
[2025-02-20 04:09:08,927][01130] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3656.1). Total num frames: 987136. Throughput: 0: 960.1. Samples: 246990. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:09:08,932][01130] Avg episode reward: [(0, '5.661')]
[2025-02-20 04:09:13,927][01130] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3634.3). Total num frames: 999424. Throughput: 0: 935.7. Samples: 249006. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:09:13,931][01130] Avg episode reward: [(0, '5.745')]
[2025-02-20 04:09:13,936][03356] Saving new best policy, reward=5.745!
[2025-02-20 04:09:18,798][03373] Updated weights for policy 0, policy_version 250 (0.0013)
[2025-02-20 04:09:18,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3657.1). Total num frames: 1024000. Throughput: 0: 962.8. Samples: 255002. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:09:18,928][01130] Avg episode reward: [(0, '6.172')]
[2025-02-20 04:09:18,933][03356] Saving new best policy, reward=6.172!
[2025-02-20 04:09:23,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3650.5). Total num frames: 1040384. Throughput: 0: 953.4. Samples: 260964. Policy #0 lag: (min: 0.0, avg: 0.2, max: 2.0)
[2025-02-20 04:09:23,929][01130] Avg episode reward: [(0, '6.599')]
[2025-02-20 04:09:23,938][03356] Saving new best policy, reward=6.599!
[2025-02-20 04:09:28,927][01130] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3658.2). Total num frames: 1060864. Throughput: 0: 939.7. Samples: 263162. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:09:28,929][01130] Avg episode reward: [(0, '7.046')]
[2025-02-20 04:09:28,933][03356] Saving new best policy, reward=7.046!
[2025-02-20 04:09:29,777][03373] Updated weights for policy 0, policy_version 260 (0.0020)
[2025-02-20 04:09:33,928][01130] Fps is (10 sec: 4095.8, 60 sec: 3891.4, 300 sec: 3665.6). Total num frames: 1081344. Throughput: 0: 964.7. Samples: 269526. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:09:33,929][01130] Avg episode reward: [(0, '6.941')]
[2025-02-20 04:09:38,927][01130] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1097728. Throughput: 0: 940.5. Samples: 274952. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:09:38,932][01130] Avg episode reward: [(0, '6.503')]
[2025-02-20 04:09:40,616][03373] Updated weights for policy 0, policy_version 270 (0.0026)
[2025-02-20 04:09:43,927][01130] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 1118208. Throughput: 0: 949.9. Samples: 277790. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:09:43,929][01130] Avg episode reward: [(0, '6.834')]
[2025-02-20 04:09:48,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 1138688. Throughput: 0: 965.9. Samples: 284078. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-20 04:09:48,932][01130] Avg episode reward: [(0, '6.612')]
[2025-02-20 04:09:50,458][03373] Updated weights for policy 0, policy_version 280 (0.0015)
[2025-02-20 04:09:53,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1155072. Throughput: 0: 936.6. Samples: 289136. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-20 04:09:53,933][01130] Avg episode reward: [(0, '6.763')]
[2025-02-20 04:09:58,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 1175552. Throughput: 0: 961.2. Samples: 292262. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:09:58,932][01130] Avg episode reward: [(0, '6.976')]
[2025-02-20 04:10:01,224][03373] Updated weights for policy 0, policy_version 290 (0.0012)
[2025-02-20 04:10:03,928][01130] Fps is (10 sec: 4095.8, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1196032. Throughput: 0: 973.1. Samples: 298790. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-20 04:10:03,930][01130] Avg episode reward: [(0, '7.510')]
[2025-02-20 04:10:03,942][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000292_1196032.pth...
[2025-02-20 04:10:04,066][03356] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000070_286720.pth
[2025-02-20 04:10:04,081][03356] Saving new best policy, reward=7.510!
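The learner rotates checkpoints: each new checkpoint_*.pth triggers removal of the oldest, and "Saving new best policy" lines track the running best reward. A hedged sketch for inspecting one of the files named above; the internal dict layout of a Sample Factory checkpoint is an assumption here, so print the keys before relying on any of them:

import torch

# Path taken verbatim from the log above; key names below are assumptions.
ckpt_path = ("/content/train_dir/default_experiment/checkpoint_p0/"
             "checkpoint_000000292_1196032.pth")
ckpt = torch.load(ckpt_path, map_location="cpu", weights_only=False)
print(sorted(ckpt))                   # top-level keys (assumed: model state, step counters, ...)

state = ckpt.get("model", {})         # "model" is an assumed key name
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))  # shapes should match the architecture printout earlier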
|
[2025-02-20 04:10:08,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 1212416. Throughput: 0: 946.5. Samples: 303556. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:10:08,929][01130] Avg episode reward: [(0, '7.480')] |
|
[2025-02-20 04:10:12,308][03373] Updated weights for policy 0, policy_version 300 (0.0014) |
|
[2025-02-20 04:10:13,927][01130] Fps is (10 sec: 3686.6, 60 sec: 3891.2, 300 sec: 3762.8). Total num frames: 1232896. Throughput: 0: 967.2. Samples: 306684. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:10:13,929][01130] Avg episode reward: [(0, '6.993')] |
|
[2025-02-20 04:10:18,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1253376. Throughput: 0: 966.6. Samples: 313024. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:10:18,929][01130] Avg episode reward: [(0, '6.856')] |
|
[2025-02-20 04:10:23,175][03373] Updated weights for policy 0, policy_version 310 (0.0013) |
|
[2025-02-20 04:10:23,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 1269760. Throughput: 0: 961.3. Samples: 318210. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:10:23,933][01130] Avg episode reward: [(0, '6.860')] |
|
[2025-02-20 04:10:28,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 1294336. Throughput: 0: 971.6. Samples: 321512. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:10:28,928][01130] Avg episode reward: [(0, '7.346')] |
|
[2025-02-20 04:10:33,932][01130] Fps is (10 sec: 3684.6, 60 sec: 3754.4, 300 sec: 3776.6). Total num frames: 1306624. Throughput: 0: 955.4. Samples: 327074. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:10:33,933][01130] Avg episode reward: [(0, '7.675')] |
|
[2025-02-20 04:10:33,971][03356] Saving new best policy, reward=7.675! |
|
[2025-02-20 04:10:33,981][03373] Updated weights for policy 0, policy_version 320 (0.0016) |
|
[2025-02-20 04:10:38,927][01130] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1327104. Throughput: 0: 958.0. Samples: 332246. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:10:38,929][01130] Avg episode reward: [(0, '7.547')] |
|
[2025-02-20 04:10:43,927][01130] Fps is (10 sec: 4098.0, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1347584. Throughput: 0: 958.8. Samples: 335406. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:10:43,929][01130] Avg episode reward: [(0, '7.549')] |
|
[2025-02-20 04:10:44,242][03373] Updated weights for policy 0, policy_version 330 (0.0013) |
|
[2025-02-20 04:10:48,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 1363968. Throughput: 0: 929.6. Samples: 340622. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:10:48,929][01130] Avg episode reward: [(0, '8.482')] |
|
[2025-02-20 04:10:48,934][03356] Saving new best policy, reward=8.482! |
|
[2025-02-20 04:10:53,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1384448. Throughput: 0: 950.8. Samples: 346344. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:10:53,931][01130] Avg episode reward: [(0, '8.724')] |
|
[2025-02-20 04:10:53,937][03356] Saving new best policy, reward=8.724! |
|
[2025-02-20 04:10:55,733][03373] Updated weights for policy 0, policy_version 340 (0.0016) |
|
[2025-02-20 04:10:58,927][01130] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1404928. Throughput: 0: 952.2. Samples: 349532. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:10:58,931][01130] Avg episode reward: [(0, '9.281')] |
|
[2025-02-20 04:10:58,932][03356] Saving new best policy, reward=9.281! |
|
[2025-02-20 04:11:03,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 1421312. Throughput: 0: 920.0. Samples: 354424. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:11:03,929][01130] Avg episode reward: [(0, '8.810')] |
|
[2025-02-20 04:11:06,505][03373] Updated weights for policy 0, policy_version 350 (0.0021) |
|
[2025-02-20 04:11:08,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1441792. Throughput: 0: 945.5. Samples: 360758. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:11:08,929][01130] Avg episode reward: [(0, '9.905')] |
|
[2025-02-20 04:11:08,932][03356] Saving new best policy, reward=9.905! |
|
[2025-02-20 04:11:13,928][01130] Fps is (10 sec: 4095.8, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1462272. Throughput: 0: 944.8. Samples: 364030. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:11:13,931][01130] Avg episode reward: [(0, '10.645')] |
|
[2025-02-20 04:11:13,940][03356] Saving new best policy, reward=10.645! |
|
[2025-02-20 04:11:17,864][03373] Updated weights for policy 0, policy_version 360 (0.0013) |
|
[2025-02-20 04:11:18,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3790.6). Total num frames: 1478656. Throughput: 0: 928.7. Samples: 368862. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:11:18,929][01130] Avg episode reward: [(0, '11.214')] |
|
[2025-02-20 04:11:18,931][03356] Saving new best policy, reward=11.214! |
|
[2025-02-20 04:11:23,927][01130] Fps is (10 sec: 3686.6, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1499136. Throughput: 0: 956.4. Samples: 375284. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:11:23,928][01130] Avg episode reward: [(0, '12.405')] |
|
[2025-02-20 04:11:23,939][03356] Saving new best policy, reward=12.405! |
|
[2025-02-20 04:11:27,397][03373] Updated weights for policy 0, policy_version 370 (0.0017) |
|
[2025-02-20 04:11:28,934][01130] Fps is (10 sec: 3684.1, 60 sec: 3686.0, 300 sec: 3776.6). Total num frames: 1515520. Throughput: 0: 958.3. Samples: 378536. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:11:28,935][01130] Avg episode reward: [(0, '11.868')] |
|
[2025-02-20 04:11:33,928][01130] Fps is (10 sec: 3686.1, 60 sec: 3823.2, 300 sec: 3790.5). Total num frames: 1536000. Throughput: 0: 953.3. Samples: 383520. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:11:33,935][01130] Avg episode reward: [(0, '11.348')] |
|
[2025-02-20 04:11:38,082][03373] Updated weights for policy 0, policy_version 380 (0.0015) |
|
[2025-02-20 04:11:38,927][01130] Fps is (10 sec: 4508.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1560576. Throughput: 0: 971.2. Samples: 390048. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:11:38,933][01130] Avg episode reward: [(0, '11.439')] |
|
[2025-02-20 04:11:43,927][01130] Fps is (10 sec: 3686.7, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 1572864. Throughput: 0: 962.2. Samples: 392832. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:11:43,929][01130] Avg episode reward: [(0, '10.960')] |
|
[2025-02-20 04:11:48,871][03373] Updated weights for policy 0, policy_version 390 (0.0013) |
|
[2025-02-20 04:11:48,927][01130] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1597440. Throughput: 0: 976.7. Samples: 398374. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:11:48,928][01130] Avg episode reward: [(0, '12.193')] |
|
[2025-02-20 04:11:53,927][01130] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1617920. Throughput: 0: 981.2. Samples: 404914. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:11:53,929][01130] Avg episode reward: [(0, '13.256')] |
|
[2025-02-20 04:11:53,935][03356] Saving new best policy, reward=13.256! |
|
[2025-02-20 04:11:58,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3790.6). Total num frames: 1634304. Throughput: 0: 956.3. Samples: 407062. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:11:58,928][01130] Avg episode reward: [(0, '13.970')] |
|
[2025-02-20 04:11:58,935][03356] Saving new best policy, reward=13.970! |
|
[2025-02-20 04:11:59,623][03373] Updated weights for policy 0, policy_version 400 (0.0014) |
|
[2025-02-20 04:12:03,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1654784. Throughput: 0: 982.2. Samples: 413062. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:12:03,935][01130] Avg episode reward: [(0, '14.125')] |
|
[2025-02-20 04:12:03,951][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000404_1654784.pth... |
|
[2025-02-20 04:12:04,066][03356] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000179_733184.pth |
|
[2025-02-20 04:12:04,078][03356] Saving new best policy, reward=14.125! |
|
[2025-02-20 04:12:08,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1675264. Throughput: 0: 975.2. Samples: 419170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:12:08,932][01130] Avg episode reward: [(0, '13.841')] |
|
[2025-02-20 04:12:10,148][03373] Updated weights for policy 0, policy_version 410 (0.0012) |
|
[2025-02-20 04:12:13,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3804.4). Total num frames: 1691648. Throughput: 0: 949.1. Samples: 421240. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:12:13,933][01130] Avg episode reward: [(0, '13.744')] |
|
[2025-02-20 04:12:18,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1712128. Throughput: 0: 981.0. Samples: 427666. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:12:18,931][01130] Avg episode reward: [(0, '14.395')] |
|
[2025-02-20 04:12:18,935][03356] Saving new best policy, reward=14.395! |
|
[2025-02-20 04:12:20,180][03373] Updated weights for policy 0, policy_version 420 (0.0027) |
|
[2025-02-20 04:12:23,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1732608. Throughput: 0: 959.2. Samples: 433214. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:12:23,934][01130] Avg episode reward: [(0, '15.037')] |
|
[2025-02-20 04:12:23,940][03356] Saving new best policy, reward=15.037! |
|
[2025-02-20 04:12:28,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3891.6, 300 sec: 3804.5). Total num frames: 1748992. Throughput: 0: 954.1. Samples: 435768. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:12:28,930][01130] Avg episode reward: [(0, '14.446')] |
|
[2025-02-20 04:12:31,043][03373] Updated weights for policy 0, policy_version 430 (0.0013) |
|
[2025-02-20 04:12:33,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 1773568. Throughput: 0: 977.3. Samples: 442354. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:12:33,929][01130] Avg episode reward: [(0, '14.310')] |
|
[2025-02-20 04:12:38,927][01130] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 1785856. Throughput: 0: 945.0. Samples: 447440. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:12:38,929][01130] Avg episode reward: [(0, '13.274')] |
|
[2025-02-20 04:12:41,911][03373] Updated weights for policy 0, policy_version 440 (0.0018) |
|
[2025-02-20 04:12:43,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 1810432. Throughput: 0: 966.0. Samples: 450532. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:12:43,929][01130] Avg episode reward: [(0, '13.839')] |
|
[2025-02-20 04:12:48,930][01130] Fps is (10 sec: 4504.2, 60 sec: 3891.0, 300 sec: 3846.0). Total num frames: 1830912. Throughput: 0: 976.9. Samples: 457026. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:12:48,935][01130] Avg episode reward: [(0, '14.596')] |
|
[2025-02-20 04:12:52,653][03373] Updated weights for policy 0, policy_version 450 (0.0020) |
|
[2025-02-20 04:12:53,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1847296. Throughput: 0: 954.1. Samples: 462106. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:12:53,931][01130] Avg episode reward: [(0, '14.898')] |
|
[2025-02-20 04:12:58,927][01130] Fps is (10 sec: 3687.5, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1867776. Throughput: 0: 979.6. Samples: 465322. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:12:58,930][01130] Avg episode reward: [(0, '15.733')] |
|
[2025-02-20 04:12:58,934][03356] Saving new best policy, reward=15.733! |
|
[2025-02-20 04:13:02,080][03373] Updated weights for policy 0, policy_version 460 (0.0017) |
|
[2025-02-20 04:13:03,929][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1888256. Throughput: 0: 981.4. Samples: 471830. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:13:03,931][01130] Avg episode reward: [(0, '15.402')] |
|
[2025-02-20 04:13:08,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1904640. Throughput: 0: 969.8. Samples: 476854. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:13:08,933][01130] Avg episode reward: [(0, '14.296')] |
|
[2025-02-20 04:13:12,855][03373] Updated weights for policy 0, policy_version 470 (0.0012) |
|
[2025-02-20 04:13:13,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 1929216. Throughput: 0: 986.7. Samples: 480168. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:13:13,933][01130] Avg episode reward: [(0, '14.271')] |
|
[2025-02-20 04:13:18,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1945600. Throughput: 0: 973.5. Samples: 486160. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:13:18,933][01130] Avg episode reward: [(0, '15.142')] |
|
[2025-02-20 04:13:23,791][03373] Updated weights for policy 0, policy_version 480 (0.0015) |
|
[2025-02-20 04:13:23,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1966080. Throughput: 0: 983.6. Samples: 491704. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:13:23,933][01130] Avg episode reward: [(0, '15.726')] |
|
[2025-02-20 04:13:28,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 1986560. Throughput: 0: 987.0. Samples: 494946. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:13:28,933][01130] Avg episode reward: [(0, '16.756')] |
|
[2025-02-20 04:13:28,937][03356] Saving new best policy, reward=16.756! |
|
[2025-02-20 04:13:33,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 2002944. Throughput: 0: 968.0. Samples: 500582. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:13:33,929][01130] Avg episode reward: [(0, '17.367')] |
|
[2025-02-20 04:13:33,940][03356] Saving new best policy, reward=17.367! |
|
[2025-02-20 04:13:34,566][03373] Updated weights for policy 0, policy_version 490 (0.0013) |
|
[2025-02-20 04:13:38,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2023424. Throughput: 0: 985.7. Samples: 506462. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:13:38,928][01130] Avg episode reward: [(0, '17.657')] |
|
[2025-02-20 04:13:38,933][03356] Saving new best policy, reward=17.657! |
|
[2025-02-20 04:13:43,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2043904. Throughput: 0: 987.1. Samples: 509742. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:13:43,929][01130] Avg episode reward: [(0, '17.264')] |
|
[2025-02-20 04:13:44,005][03373] Updated weights for policy 0, policy_version 500 (0.0014) |
|
[2025-02-20 04:13:48,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3832.2). Total num frames: 2060288. Throughput: 0: 956.7. Samples: 514882. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:13:48,929][01130] Avg episode reward: [(0, '17.775')] |
|
[2025-02-20 04:13:48,933][03356] Saving new best policy, reward=17.775! |
|
[2025-02-20 04:13:53,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2084864. Throughput: 0: 987.5. Samples: 521290. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:13:53,929][01130] Avg episode reward: [(0, '17.745')] |
|
[2025-02-20 04:13:54,815][03373] Updated weights for policy 0, policy_version 510 (0.0013) |
|
[2025-02-20 04:13:58,928][01130] Fps is (10 sec: 4095.6, 60 sec: 3891.1, 300 sec: 3846.1). Total num frames: 2101248. Throughput: 0: 985.8. Samples: 524528. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:13:58,933][01130] Avg episode reward: [(0, '19.033')] |
|
[2025-02-20 04:13:58,955][03356] Saving new best policy, reward=19.033! |
|
[2025-02-20 04:14:03,927][01130] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 2117632. Throughput: 0: 959.7. Samples: 529346. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:14:03,931][01130] Avg episode reward: [(0, '18.247')] |
|
[2025-02-20 04:14:03,938][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000518_2121728.pth... |
|
[2025-02-20 04:14:04,059][03356] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000292_1196032.pth |
|
[2025-02-20 04:14:05,821][03373] Updated weights for policy 0, policy_version 520 (0.0025) |
|
[2025-02-20 04:14:08,927][01130] Fps is (10 sec: 4096.3, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2142208. Throughput: 0: 980.9. Samples: 535846. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:14:08,930][01130] Avg episode reward: [(0, '18.907')] |
|
[2025-02-20 04:14:13,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 2158592. Throughput: 0: 982.0. Samples: 539134. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:14:13,932][01130] Avg episode reward: [(0, '19.334')] |
|
[2025-02-20 04:14:13,939][03356] Saving new best policy, reward=19.334! |
|
[2025-02-20 04:14:16,778][03373] Updated weights for policy 0, policy_version 530 (0.0019) |
|
[2025-02-20 04:14:18,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2179072. Throughput: 0: 966.5. Samples: 544074. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:14:18,929][01130] Avg episode reward: [(0, '18.880')] |
|
[2025-02-20 04:14:23,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2199552. Throughput: 0: 979.8. Samples: 550552. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:14:23,929][01130] Avg episode reward: [(0, '18.612')] |
|
[2025-02-20 04:14:26,552][03373] Updated weights for policy 0, policy_version 540 (0.0012) |
|
[2025-02-20 04:14:28,928][01130] Fps is (10 sec: 3686.2, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 2215936. Throughput: 0: 968.6. Samples: 553328. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:14:28,929][01130] Avg episode reward: [(0, '20.743')] |
|
[2025-02-20 04:14:28,931][03356] Saving new best policy, reward=20.743! |
|
[2025-02-20 04:14:33,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2236416. Throughput: 0: 975.1. Samples: 558760. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:14:33,929][01130] Avg episode reward: [(0, '20.383')] |
|
[2025-02-20 04:14:36,978][03373] Updated weights for policy 0, policy_version 550 (0.0016) |
|
[2025-02-20 04:14:38,927][01130] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2260992. Throughput: 0: 978.0. Samples: 565302. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:14:38,929][01130] Avg episode reward: [(0, '20.334')] |
|
[2025-02-20 04:14:43,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 2273280. Throughput: 0: 956.9. Samples: 567590. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:14:43,929][01130] Avg episode reward: [(0, '19.496')] |
|
[2025-02-20 04:14:47,679][03373] Updated weights for policy 0, policy_version 560 (0.0016) |
|
[2025-02-20 04:14:48,927][01130] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2297856. Throughput: 0: 985.2. Samples: 573682. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:14:48,929][01130] Avg episode reward: [(0, '17.260')] |
|
[2025-02-20 04:14:53,928][01130] Fps is (10 sec: 4505.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2318336. Throughput: 0: 980.4. Samples: 579964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-20 04:14:53,929][01130] Avg episode reward: [(0, '16.838')] |
|
[2025-02-20 04:14:58,419][03373] Updated weights for policy 0, policy_version 570 (0.0020) |
|
[2025-02-20 04:14:58,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 2334720. Throughput: 0: 952.1. Samples: 581980. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:14:58,929][01130] Avg episode reward: [(0, '17.258')] |
|
[2025-02-20 04:15:03,927][01130] Fps is (10 sec: 3686.7, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2355200. Throughput: 0: 987.6. Samples: 588518. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:15:03,929][01130] Avg episode reward: [(0, '18.621')] |
|
[2025-02-20 04:15:08,525][03373] Updated weights for policy 0, policy_version 580 (0.0015) |
|
[2025-02-20 04:15:08,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2375680. Throughput: 0: 970.6. Samples: 594228. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:15:08,931][01130] Avg episode reward: [(0, '19.505')] |
|
[2025-02-20 04:15:13,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2396160. Throughput: 0: 967.2. Samples: 596852. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:15:13,932][01130] Avg episode reward: [(0, '20.330')] |
|
[2025-02-20 04:15:18,569][03373] Updated weights for policy 0, policy_version 590 (0.0017) |
|
[2025-02-20 04:15:18,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2416640. Throughput: 0: 991.9. Samples: 603396. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:15:18,929][01130] Avg episode reward: [(0, '22.358')] |
|
[2025-02-20 04:15:18,934][03356] Saving new best policy, reward=22.358! |
|
[2025-02-20 04:15:23,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2433024. Throughput: 0: 959.5. Samples: 608478. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:15:23,929][01130] Avg episode reward: [(0, '22.459')] |
|
[2025-02-20 04:15:23,939][03356] Saving new best policy, reward=22.459! |
|
[2025-02-20 04:15:28,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.8). Total num frames: 2453504. Throughput: 0: 974.4. Samples: 611438. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:15:28,929][01130] Avg episode reward: [(0, '21.815')] |
|
[2025-02-20 04:15:29,733][03373] Updated weights for policy 0, policy_version 600 (0.0019) |
|
[2025-02-20 04:15:33,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2473984. Throughput: 0: 979.6. Samples: 617766. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:15:33,929][01130] Avg episode reward: [(0, '21.461')] |
|
[2025-02-20 04:15:38,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 2490368. Throughput: 0: 948.9. Samples: 622662. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:15:38,929][01130] Avg episode reward: [(0, '20.987')] |
|
[2025-02-20 04:15:40,669][03373] Updated weights for policy 0, policy_version 610 (0.0013) |
|
[2025-02-20 04:15:43,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2510848. Throughput: 0: 975.4. Samples: 625874. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:15:43,930][01130] Avg episode reward: [(0, '21.496')] |
|
[2025-02-20 04:15:48,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2531328. Throughput: 0: 973.2. Samples: 632310. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:15:48,930][01130] Avg episode reward: [(0, '20.942')] |
|
[2025-02-20 04:15:51,682][03373] Updated weights for policy 0, policy_version 620 (0.0019) |
|
[2025-02-20 04:15:53,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3823.0, 300 sec: 3873.8). Total num frames: 2547712. Throughput: 0: 957.5. Samples: 637318. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:15:53,929][01130] Avg episode reward: [(0, '20.973')] |
|
[2025-02-20 04:15:58,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2568192. Throughput: 0: 971.7. Samples: 640578. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:15:58,929][01130] Avg episode reward: [(0, '21.514')] |
|
[2025-02-20 04:16:01,056][03373] Updated weights for policy 0, policy_version 630 (0.0017) |
|
[2025-02-20 04:16:03,930][01130] Fps is (10 sec: 4094.8, 60 sec: 3891.0, 300 sec: 3887.7). Total num frames: 2588672. Throughput: 0: 964.4. Samples: 646798. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:16:03,936][01130] Avg episode reward: [(0, '21.956')] |
|
[2025-02-20 04:16:03,948][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000632_2588672.pth... |
|
[2025-02-20 04:16:04,104][03356] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000404_1654784.pth |
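
Each periodic "Saving … checkpoint_<version>_<frames>.pth" line is paired with a "Removing …" line for an older file, i.e. checkpoints are rotated so only the most recent few stay on disk. A minimal sketch of that rotation, assuming the zero-padded naming visible in this log (so lexicographic order matches age):

    import os

    def rotate_checkpoints(ckpt_dir, keep=2):
        """Delete all but the newest `keep` checkpoints in ckpt_dir."""
        ckpts = sorted(f for f in os.listdir(ckpt_dir)
                       if f.startswith("checkpoint_") and f.endswith(".pth"))
        for old in ckpts[:-keep]:
            os.remove(os.path.join(ckpt_dir, old))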
|
[2025-02-20 04:16:08,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.9). Total num frames: 2605056. Throughput: 0: 971.1. Samples: 652176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:16:08,933][01130] Avg episode reward: [(0, '21.977')] |
|
[2025-02-20 04:16:11,857][03373] Updated weights for policy 0, policy_version 640 (0.0018) |
|
[2025-02-20 04:16:13,927][01130] Fps is (10 sec: 4097.3, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2629632. Throughput: 0: 978.0. Samples: 655446. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:16:13,932][01130] Avg episode reward: [(0, '21.141')] |
|
[2025-02-20 04:16:18,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 2646016. Throughput: 0: 962.2. Samples: 661064. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:16:18,929][01130] Avg episode reward: [(0, '21.558')] |
|
[2025-02-20 04:16:22,691][03373] Updated weights for policy 0, policy_version 650 (0.0013) |
|
[2025-02-20 04:16:23,929][01130] Fps is (10 sec: 3685.7, 60 sec: 3891.1, 300 sec: 3901.7). Total num frames: 2666496. Throughput: 0: 984.4. Samples: 666964. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:16:23,934][01130] Avg episode reward: [(0, '20.847')] |
|
[2025-02-20 04:16:28,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2686976. Throughput: 0: 987.8. Samples: 670326. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:16:28,932][01130] Avg episode reward: [(0, '20.583')] |
|
[2025-02-20 04:16:33,328][03373] Updated weights for policy 0, policy_version 660 (0.0019) |
|
[2025-02-20 04:16:33,927][01130] Fps is (10 sec: 3687.1, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 2703360. Throughput: 0: 958.2. Samples: 675428. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:16:33,929][01130] Avg episode reward: [(0, '19.411')] |
|
[2025-02-20 04:16:38,930][01130] Fps is (10 sec: 4094.9, 60 sec: 3959.3, 300 sec: 3915.5). Total num frames: 2727936. Throughput: 0: 993.5. Samples: 682030. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:16:38,932][01130] Avg episode reward: [(0, '19.311')] |
|
[2025-02-20 04:16:42,565][03373] Updated weights for policy 0, policy_version 670 (0.0013) |
|
[2025-02-20 04:16:43,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2744320. Throughput: 0: 993.5. Samples: 685284. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:16:43,933][01130] Avg episode reward: [(0, '18.878')] |
|
[2025-02-20 04:16:48,927][01130] Fps is (10 sec: 3687.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2764800. Throughput: 0: 966.2. Samples: 690272. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:16:48,929][01130] Avg episode reward: [(0, '19.823')] |
|
[2025-02-20 04:16:53,274][03373] Updated weights for policy 0, policy_version 680 (0.0017) |
|
[2025-02-20 04:16:53,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2785280. Throughput: 0: 992.4. Samples: 696832. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:16:53,929][01130] Avg episode reward: [(0, '19.596')] |
|
[2025-02-20 04:16:58,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2801664. Throughput: 0: 989.5. Samples: 699974. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:16:58,932][01130] Avg episode reward: [(0, '21.914')] |
|
[2025-02-20 04:17:03,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.4, 300 sec: 3887.7). Total num frames: 2822144. Throughput: 0: 978.7. Samples: 705106. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:17:03,929][01130] Avg episode reward: [(0, '23.451')] |
|
[2025-02-20 04:17:03,935][03356] Saving new best policy, reward=23.451! |
|
[2025-02-20 04:17:04,313][03373] Updated weights for policy 0, policy_version 690 (0.0029) |
|
[2025-02-20 04:17:08,927][01130] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2842624. Throughput: 0: 991.3. Samples: 711570. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:17:08,929][01130] Avg episode reward: [(0, '24.654')] |
|
[2025-02-20 04:17:08,977][03356] Saving new best policy, reward=24.654! |
|
[2025-02-20 04:17:13,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 2859008. Throughput: 0: 975.6. Samples: 714230. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-02-20 04:17:13,930][01130] Avg episode reward: [(0, '24.098')] |
|
[2025-02-20 04:17:15,096][03373] Updated weights for policy 0, policy_version 700 (0.0017) |
|
[2025-02-20 04:17:18,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2883584. Throughput: 0: 985.4. Samples: 719772. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:17:18,929][01130] Avg episode reward: [(0, '22.794')] |
|
[2025-02-20 04:17:23,927][01130] Fps is (10 sec: 4505.6, 60 sec: 3959.6, 300 sec: 3915.5). Total num frames: 2904064. Throughput: 0: 982.4. Samples: 726234. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:17:23,931][01130] Avg episode reward: [(0, '21.140')] |
|
[2025-02-20 04:17:24,997][03373] Updated weights for policy 0, policy_version 710 (0.0020) |
|
[2025-02-20 04:17:28,928][01130] Fps is (10 sec: 3686.1, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2920448. Throughput: 0: 957.2. Samples: 728360. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:17:28,931][01130] Avg episode reward: [(0, '20.110')] |
|
[2025-02-20 04:17:33,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2940928. Throughput: 0: 980.2. Samples: 734382. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-20 04:17:33,933][01130] Avg episode reward: [(0, '20.421')] |
|
[2025-02-20 04:17:35,579][03373] Updated weights for policy 0, policy_version 720 (0.0014) |
|
[2025-02-20 04:17:38,927][01130] Fps is (10 sec: 4096.3, 60 sec: 3891.4, 300 sec: 3901.6). Total num frames: 2961408. Throughput: 0: 968.9. Samples: 740432. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:17:38,931][01130] Avg episode reward: [(0, '19.703')] |
|
[2025-02-20 04:17:43,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 2977792. Throughput: 0: 946.7. Samples: 742576. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:17:43,929][01130] Avg episode reward: [(0, '20.220')] |
|
[2025-02-20 04:17:46,469][03373] Updated weights for policy 0, policy_version 730 (0.0042) |
|
[2025-02-20 04:17:48,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2998272. Throughput: 0: 976.4. Samples: 749046. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:17:48,930][01130] Avg episode reward: [(0, '20.898')] |
|
[2025-02-20 04:17:53,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 3014656. Throughput: 0: 956.4. Samples: 754610. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:17:53,929][01130] Avg episode reward: [(0, '21.083')] |
|
[2025-02-20 04:17:57,534][03373] Updated weights for policy 0, policy_version 740 (0.0012) |
|
[2025-02-20 04:17:58,929][01130] Fps is (10 sec: 3685.9, 60 sec: 3891.1, 300 sec: 3887.7). Total num frames: 3035136. Throughput: 0: 954.3. Samples: 757174. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-20 04:17:58,936][01130] Avg episode reward: [(0, '19.869')] |
|
[2025-02-20 04:18:03,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3055616. Throughput: 0: 975.3. Samples: 763660. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:18:03,931][01130] Avg episode reward: [(0, '20.205')] |
|
[2025-02-20 04:18:03,941][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000746_3055616.pth... |
|
[2025-02-20 04:18:04,073][03356] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000518_2121728.pth |
|
[2025-02-20 04:18:08,077][03373] Updated weights for policy 0, policy_version 750 (0.0012) |
|
[2025-02-20 04:18:08,927][01130] Fps is (10 sec: 3686.9, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3072000. Throughput: 0: 940.6. Samples: 768560. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:18:08,931][01130] Avg episode reward: [(0, '20.840')] |
|
[2025-02-20 04:18:13,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3092480. Throughput: 0: 964.9. Samples: 771780. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:18:13,933][01130] Avg episode reward: [(0, '20.329')] |
|
[2025-02-20 04:18:17,860][03373] Updated weights for policy 0, policy_version 760 (0.0013) |
|
[2025-02-20 04:18:18,927][01130] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3117056. Throughput: 0: 976.8. Samples: 778336. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-20 04:18:18,933][01130] Avg episode reward: [(0, '20.588')] |
|
[2025-02-20 04:18:23,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 3129344. Throughput: 0: 953.7. Samples: 783350. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:18:23,935][01130] Avg episode reward: [(0, '20.828')] |
|
[2025-02-20 04:18:28,644][03373] Updated weights for policy 0, policy_version 770 (0.0022) |
|
[2025-02-20 04:18:28,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3153920. Throughput: 0: 976.3. Samples: 786510. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:18:28,929][01130] Avg episode reward: [(0, '20.943')] |
|
[2025-02-20 04:18:33,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 3170304. Throughput: 0: 975.0. Samples: 792922. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:18:33,932][01130] Avg episode reward: [(0, '21.200')] |
|
[2025-02-20 04:18:38,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 3190784. Throughput: 0: 964.0. Samples: 797990. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:18:38,933][01130] Avg episode reward: [(0, '22.168')] |
|
[2025-02-20 04:18:39,642][03373] Updated weights for policy 0, policy_version 780 (0.0034) |
|
[2025-02-20 04:18:43,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3211264. Throughput: 0: 979.4. Samples: 801246. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:18:43,930][01130] Avg episode reward: [(0, '21.901')] |
|
[2025-02-20 04:18:48,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3227648. Throughput: 0: 967.3. Samples: 807188. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:18:48,928][01130] Avg episode reward: [(0, '22.438')] |
|
[2025-02-20 04:18:50,345][03373] Updated weights for policy 0, policy_version 790 (0.0014) |
|
[2025-02-20 04:18:53,928][01130] Fps is (10 sec: 3686.2, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3248128. Throughput: 0: 984.9. Samples: 812882. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:18:53,935][01130] Avg episode reward: [(0, '22.626')] |
|
[2025-02-20 04:18:58,927][01130] Fps is (10 sec: 4505.6, 60 sec: 3959.6, 300 sec: 3915.5). Total num frames: 3272704. Throughput: 0: 982.6. Samples: 815998. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:18:58,933][01130] Avg episode reward: [(0, '23.621')] |
|
[2025-02-20 04:18:59,952][03373] Updated weights for policy 0, policy_version 800 (0.0012) |
|
[2025-02-20 04:19:03,927][01130] Fps is (10 sec: 3686.6, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3284992. Throughput: 0: 954.7. Samples: 821298. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:19:03,933][01130] Avg episode reward: [(0, '22.105')] |
|
[2025-02-20 04:19:08,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3309568. Throughput: 0: 981.9. Samples: 827536. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:19:08,934][01130] Avg episode reward: [(0, '22.975')] |
|
[2025-02-20 04:19:10,616][03373] Updated weights for policy 0, policy_version 810 (0.0014) |
|
[2025-02-20 04:19:13,927][01130] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3330048. Throughput: 0: 983.7. Samples: 830776. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:19:13,929][01130] Avg episode reward: [(0, '23.835')] |
|
[2025-02-20 04:19:18,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 3346432. Throughput: 0: 951.6. Samples: 835744. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:19:18,928][01130] Avg episode reward: [(0, '24.717')] |
|
[2025-02-20 04:19:18,930][03356] Saving new best policy, reward=24.717! |
|
[2025-02-20 04:19:21,636][03373] Updated weights for policy 0, policy_version 820 (0.0022) |
|
[2025-02-20 04:19:23,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3366912. Throughput: 0: 980.9. Samples: 842132. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:19:23,936][01130] Avg episode reward: [(0, '24.712')] |
|
[2025-02-20 04:19:28,927][01130] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 3383296. Throughput: 0: 978.1. Samples: 845260. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:19:28,932][01130] Avg episode reward: [(0, '25.455')] |
|
[2025-02-20 04:19:28,936][03356] Saving new best policy, reward=25.455! |
|
[2025-02-20 04:19:32,600][03373] Updated weights for policy 0, policy_version 830 (0.0015) |
|
[2025-02-20 04:19:33,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3403776. Throughput: 0: 956.5. Samples: 850230. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:19:33,933][01130] Avg episode reward: [(0, '24.427')] |
|
[2025-02-20 04:19:38,927][01130] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3424256. Throughput: 0: 971.9. Samples: 856616. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:19:38,929][01130] Avg episode reward: [(0, '22.872')] |
|
[2025-02-20 04:19:42,958][03373] Updated weights for policy 0, policy_version 840 (0.0013) |
|
[2025-02-20 04:19:43,931][01130] Fps is (10 sec: 3685.0, 60 sec: 3822.7, 300 sec: 3873.8). Total num frames: 3440640. Throughput: 0: 968.1. Samples: 859564. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:19:43,937][01130] Avg episode reward: [(0, '23.861')] |
|
[2025-02-20 04:19:48,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 3461120. Throughput: 0: 964.8. Samples: 864716. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:19:48,934][01130] Avg episode reward: [(0, '24.887')] |
|
[2025-02-20 04:19:53,034][03373] Updated weights for policy 0, policy_version 850 (0.0016) |
|
[2025-02-20 04:19:53,930][01130] Fps is (10 sec: 4505.9, 60 sec: 3959.3, 300 sec: 3901.6). Total num frames: 3485696. Throughput: 0: 972.1. Samples: 871284. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:19:53,932][01130] Avg episode reward: [(0, '23.015')] |
|
[2025-02-20 04:19:58,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 3497984. Throughput: 0: 953.2. Samples: 873672. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:19:58,929][01130] Avg episode reward: [(0, '23.645')] |
|
[2025-02-20 04:20:03,927][01130] Fps is (10 sec: 3277.8, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3518464. Throughput: 0: 968.0. Samples: 879304. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:20:03,929][01130] Avg episode reward: [(0, '23.199')] |
|
[2025-02-20 04:20:03,940][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000859_3518464.pth... |
|
[2025-02-20 04:20:04,033][03356] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000632_2588672.pth |
|
[2025-02-20 04:20:04,236][03373] Updated weights for policy 0, policy_version 860 (0.0013) |
|
[2025-02-20 04:20:08,929][01130] Fps is (10 sec: 4095.3, 60 sec: 3822.8, 300 sec: 3873.8). Total num frames: 3538944. Throughput: 0: 967.3. Samples: 885660. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:20:08,930][01130] Avg episode reward: [(0, '22.264')] |
|
[2025-02-20 04:20:13,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 3555328. Throughput: 0: 942.8. Samples: 887686. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:20:13,929][01130] Avg episode reward: [(0, '21.792')] |
|
[2025-02-20 04:20:15,113][03373] Updated weights for policy 0, policy_version 870 (0.0013) |
|
[2025-02-20 04:20:18,927][01130] Fps is (10 sec: 3687.1, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3575808. Throughput: 0: 969.5. Samples: 893858. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:20:18,929][01130] Avg episode reward: [(0, '22.489')] |
|
[2025-02-20 04:20:23,928][01130] Fps is (10 sec: 4095.7, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3596288. Throughput: 0: 957.8. Samples: 899718. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:20:23,934][01130] Avg episode reward: [(0, '22.131')] |
|
[2025-02-20 04:20:26,116][03373] Updated weights for policy 0, policy_version 880 (0.0016) |
|
[2025-02-20 04:20:28,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3612672. Throughput: 0: 940.2. Samples: 901870. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:20:28,929][01130] Avg episode reward: [(0, '22.627')] |
|
[2025-02-20 04:20:33,927][01130] Fps is (10 sec: 3686.6, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3633152. Throughput: 0: 960.5. Samples: 907938. Policy #0 lag: (min: 0.0, avg: 0.2, max: 2.0) |
|
[2025-02-20 04:20:33,932][01130] Avg episode reward: [(0, '23.151')] |
|
[2025-02-20 04:20:36,097][03373] Updated weights for policy 0, policy_version 890 (0.0017) |
|
[2025-02-20 04:20:38,927][01130] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 3649536. Throughput: 0: 930.5. Samples: 913152. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-02-20 04:20:38,930][01130] Avg episode reward: [(0, '22.131')] |
|
[2025-02-20 04:20:43,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3860.0). Total num frames: 3670016. Throughput: 0: 936.0. Samples: 915790. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:20:43,930][01130] Avg episode reward: [(0, '22.767')] |
|
[2025-02-20 04:20:47,495][03373] Updated weights for policy 0, policy_version 900 (0.0016) |
|
[2025-02-20 04:20:48,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3690496. Throughput: 0: 949.4. Samples: 922026. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:20:48,929][01130] Avg episode reward: [(0, '23.000')] |
|
[2025-02-20 04:20:53,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3686.6, 300 sec: 3860.0). Total num frames: 3706880. Throughput: 0: 916.8. Samples: 926914. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:20:53,937][01130] Avg episode reward: [(0, '23.133')] |
|
[2025-02-20 04:20:58,770][03373] Updated weights for policy 0, policy_version 910 (0.0013) |
|
[2025-02-20 04:20:58,928][01130] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3727360. Throughput: 0: 940.7. Samples: 930018. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:20:58,929][01130] Avg episode reward: [(0, '21.878')] |
|
[2025-02-20 04:21:03,929][01130] Fps is (10 sec: 4095.3, 60 sec: 3822.8, 300 sec: 3873.8). Total num frames: 3747840. Throughput: 0: 945.0. Samples: 936386. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:21:03,936][01130] Avg episode reward: [(0, '22.685')] |
|
[2025-02-20 04:21:08,927][01130] Fps is (10 sec: 3686.5, 60 sec: 3754.8, 300 sec: 3846.1). Total num frames: 3764224. Throughput: 0: 923.7. Samples: 941284. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:21:08,933][01130] Avg episode reward: [(0, '23.077')] |
|
[2025-02-20 04:21:09,632][03373] Updated weights for policy 0, policy_version 920 (0.0013) |
|
[2025-02-20 04:21:13,927][01130] Fps is (10 sec: 3687.0, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3784704. Throughput: 0: 947.7. Samples: 944518. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:21:13,933][01130] Avg episode reward: [(0, '24.234')] |
|
[2025-02-20 04:21:18,931][01130] Fps is (10 sec: 4094.6, 60 sec: 3822.7, 300 sec: 3859.9). Total num frames: 3805184. Throughput: 0: 954.4. Samples: 950888. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:21:18,932][01130] Avg episode reward: [(0, '26.276')] |
|
[2025-02-20 04:21:18,933][03356] Saving new best policy, reward=26.276! |
|
[2025-02-20 04:21:20,329][03373] Updated weights for policy 0, policy_version 930 (0.0013) |
|
[2025-02-20 04:21:23,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 3821568. Throughput: 0: 951.2. Samples: 955958. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:21:23,933][01130] Avg episode reward: [(0, '25.935')] |
|
[2025-02-20 04:21:28,927][01130] Fps is (10 sec: 3687.7, 60 sec: 3823.0, 300 sec: 3860.0). Total num frames: 3842048. Throughput: 0: 964.6. Samples: 959198. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:21:28,929][01130] Avg episode reward: [(0, '26.591')] |
|
[2025-02-20 04:21:28,933][03356] Saving new best policy, reward=26.591! |
|
[2025-02-20 04:21:30,263][03373] Updated weights for policy 0, policy_version 940 (0.0013) |
|
[2025-02-20 04:21:33,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 3858432. Throughput: 0: 952.0. Samples: 964864. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:21:33,929][01130] Avg episode reward: [(0, '25.236')] |
|
[2025-02-20 04:21:38,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3878912. Throughput: 0: 966.3. Samples: 970398. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:21:38,931][01130] Avg episode reward: [(0, '23.331')] |
|
[2025-02-20 04:21:41,205][03373] Updated weights for policy 0, policy_version 950 (0.0013) |
|
[2025-02-20 04:21:43,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3899392. Throughput: 0: 968.8. Samples: 973612. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:21:43,929][01130] Avg episode reward: [(0, '22.927')] |
|
[2025-02-20 04:21:48,927][01130] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 3915776. Throughput: 0: 947.1. Samples: 979002. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:21:48,929][01130] Avg episode reward: [(0, '22.985')] |
|
[2025-02-20 04:21:51,916][03373] Updated weights for policy 0, policy_version 960 (0.0012) |
|
[2025-02-20 04:21:53,927][01130] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3940352. Throughput: 0: 973.7. Samples: 985100. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-20 04:21:53,928][01130] Avg episode reward: [(0, '23.045')] |
|
[2025-02-20 04:21:58,928][01130] Fps is (10 sec: 4505.2, 60 sec: 3891.2, 300 sec: 3859.9). Total num frames: 3960832. Throughput: 0: 973.9. Samples: 988346. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-20 04:21:58,933][01130] Avg episode reward: [(0, '22.821')] |
|
[2025-02-20 04:22:02,962][03373] Updated weights for policy 0, policy_version 970 (0.0015) |
|
[2025-02-20 04:22:03,927][01130] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3832.2). Total num frames: 3973120. Throughput: 0: 938.4. Samples: 993112. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-20 04:22:03,937][01130] Avg episode reward: [(0, '24.471')] |
|
[2025-02-20 04:22:03,998][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000971_3977216.pth... |
|
[2025-02-20 04:22:04,100][03356] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000746_3055616.pth |
|
[2025-02-20 04:22:08,927][01130] Fps is (10 sec: 3686.7, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3997696. Throughput: 0: 964.3. Samples: 999352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-20 04:22:08,928][01130] Avg episode reward: [(0, '23.896')] |
|
[2025-02-20 04:22:10,953][03356] Stopping Batcher_0... |
|
[2025-02-20 04:22:10,954][03356] Loop batcher_evt_loop terminating... |
|
[2025-02-20 04:22:10,955][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-20 04:22:10,954][01130] Component Batcher_0 stopped! |
|
[2025-02-20 04:22:10,957][01130] Component RolloutWorker_w2 process died already! Don't wait for it. |
|
[2025-02-20 04:22:10,958][01130] Component RolloutWorker_w6 process died already! Don't wait for it. |
|
[2025-02-20 04:22:10,959][01130] Component RolloutWorker_w7 process died already! Don't wait for it. |
|
[2025-02-20 04:22:11,040][03373] Weights refcount: 2 0 |
|
[2025-02-20 04:22:11,053][01130] Component InferenceWorker_p0-w0 stopped! |
|
[2025-02-20 04:22:11,054][03373] Stopping InferenceWorker_p0-w0... |
|
[2025-02-20 04:22:11,055][03373] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-02-20 04:22:11,117][03375] Stopping RolloutWorker_w1... |
|
[2025-02-20 04:22:11,117][01130] Component RolloutWorker_w1 stopped! |
|
[2025-02-20 04:22:11,113][03356] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000859_3518464.pth |
|
[2025-02-20 04:22:11,118][03375] Loop rollout_proc1_evt_loop terminating... |
|
[2025-02-20 04:22:11,134][03379] Stopping RolloutWorker_w5... |
|
[2025-02-20 04:22:11,134][01130] Component RolloutWorker_w5 stopped! |
|
[2025-02-20 04:22:11,138][03377] Stopping RolloutWorker_w3... |
|
[2025-02-20 04:22:11,139][01130] Component RolloutWorker_w3 stopped! |
|
[2025-02-20 04:22:11,146][03356] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-20 04:22:11,144][03379] Loop rollout_proc5_evt_loop terminating... |
|
[2025-02-20 04:22:11,139][03377] Loop rollout_proc3_evt_loop terminating... |
|
[2025-02-20 04:22:11,241][01130] Component RolloutWorker_w0 stopped! |
|
[2025-02-20 04:22:11,242][03374] Stopping RolloutWorker_w0... |
|
[2025-02-20 04:22:11,244][03374] Loop rollout_proc0_evt_loop terminating... |
|
[2025-02-20 04:22:11,271][01130] Component RolloutWorker_w4 stopped! |
|
[2025-02-20 04:22:11,272][03378] Stopping RolloutWorker_w4... |
|
[2025-02-20 04:22:11,277][03378] Loop rollout_proc4_evt_loop terminating... |
|
[2025-02-20 04:22:11,331][01130] Component LearnerWorker_p0 stopped! |
|
[2025-02-20 04:22:11,332][01130] Waiting for process learner_proc0 to stop... |
|
[2025-02-20 04:22:11,333][03356] Stopping LearnerWorker_p0... |
|
[2025-02-20 04:22:11,334][03356] Loop learner_proc0_evt_loop terminating... |
|
[2025-02-20 04:22:13,191][01130] Waiting for process inference_proc0-0 to join... |
|
[2025-02-20 04:22:13,192][01130] Waiting for process rollout_proc0 to join... |
|
[2025-02-20 04:22:15,005][01130] Waiting for process rollout_proc1 to join... |
|
[2025-02-20 04:22:15,007][01130] Waiting for process rollout_proc2 to join... |
|
[2025-02-20 04:22:15,008][01130] Waiting for process rollout_proc3 to join... |
|
[2025-02-20 04:22:15,019][01130] Waiting for process rollout_proc4 to join... |
|
[2025-02-20 04:22:15,020][01130] Waiting for process rollout_proc5 to join... |
|
[2025-02-20 04:22:15,021][01130] Waiting for process rollout_proc6 to join... |
|
[2025-02-20 04:22:15,021][01130] Waiting for process rollout_proc7 to join... |
|
[2025-02-20 04:22:15,022][01130] Batcher 0 profile tree view: |
|
batching: 23.6600, releasing_batches: 0.0251 |
|
[2025-02-20 04:22:15,025][01130] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0046

  wait_policy_total: 404.3187

update_model: 9.1334

  weight_update: 0.0015

one_step: 0.0085

  handle_policy_step: 604.6079

    deserialize: 14.4929, stack: 3.5256, obs_to_device_normalize: 135.4766, forward: 317.7390, send_messages: 22.4666

    prepare_outputs: 84.8388

      to_cpu: 53.2438
|
[2025-02-20 04:22:15,026][01130] Learner 0 profile tree view: |
|
misc: 0.0040, prepare_batch: 12.5608

train: 69.7391

  epoch_init: 0.0058, minibatch_init: 0.0052, losses_postprocess: 0.5688, kl_divergence: 0.5568, after_optimizer: 31.6079

  calculate_losses: 25.7504

    losses_init: 0.0040, forward_head: 1.2900, bptt_initial: 18.3055, tail: 1.0098, advantages_returns: 0.2403, losses: 2.9660

    bptt: 1.7008

      bptt_forward_core: 1.6133

  update: 10.7553

    clip: 0.8581
|
[2025-02-20 04:22:15,027][01130] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3366, enqueue_policy_requests: 184.2141, env_step: 763.8355, overhead: 14.2336, complete_rollouts: 5.4488

save_policy_outputs: 19.7784

  split_output_tensors: 7.6132
|
[2025-02-20 04:22:15,028][01130] Loop Runner_EvtLoop terminating... |
|
[2025-02-20 04:22:15,033][01130] Runner profile tree view: |
|
main_loop: 1081.8759 |
|
[2025-02-20 04:22:15,034][01130] Collected {0: 4005888}, FPS: 3702.7 |
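
The final throughput figure is consistent with the profile above: dividing the total frames collected by the main loop's wall-clock time reproduces the reported FPS.

    total_frames = 4_005_888
    main_loop_seconds = 1081.8759        # Runner profile, main_loop
    print(round(total_frames / main_loop_seconds, 1))  # 3702.7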
|
[2025-02-20 04:22:15,534][01130] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-20 04:22:15,535][01130] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-02-20 04:22:15,536][01130] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-20 04:22:15,537][01130] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-20 04:22:15,538][01130] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-20 04:22:15,539][01130] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-20 04:22:15,540][01130] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-20 04:22:15,540][01130] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-20 04:22:15,541][01130] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-02-20 04:22:15,542][01130] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-02-20 04:22:15,543][01130] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-20 04:22:15,544][01130] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-20 04:22:15,545][01130] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-20 04:22:15,546][01130] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-20 04:22:15,547][01130] Using frameskip 1 and render_action_repeat=4 for evaluation |
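
The block above shows the evaluation run reloading the saved training config and layering command-line overrides on top: known keys are overridden, unknown keys are added with a warning. A minimal sketch of that merge behaviour (an illustration, not the actual Sample Factory loader):

    import json

    def load_cfg_with_overrides(path, overrides):
        """Merge CLI overrides into a saved experiment config."""
        with open(path) as f:
            cfg = json.load(f)
        for key, value in overrides.items():
            if key in cfg:
                print(f"Overriding arg '{key}' with value {value!r} passed from command line")
            else:
                print(f"Adding new argument '{key}'={value!r} that is not in the saved config file!")
            cfg[key] = value
        return cfg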
|
[2025-02-20 04:22:15,587][01130] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-20 04:22:15,592][01130] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-20 04:22:15,594][01130] RunningMeanStd input shape: (1,) |
|
[2025-02-20 04:22:15,622][01130] ConvEncoder: input_channels=3 |
|
[2025-02-20 04:22:15,734][01130] Conv encoder output size: 512 |
|
[2025-02-20 04:22:15,735][01130] Policy head output size: 512 |
|
[2025-02-20 04:22:15,916][01130] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-20 04:22:16,690][01130] Num frames 100... |
|
[2025-02-20 04:22:16,822][01130] Num frames 200... |
|
[2025-02-20 04:22:16,951][01130] Num frames 300... |
|
[2025-02-20 04:22:17,081][01130] Num frames 400... |
|
[2025-02-20 04:22:17,212][01130] Num frames 500... |
|
[2025-02-20 04:22:17,343][01130] Num frames 600... |
|
[2025-02-20 04:22:17,475][01130] Num frames 700... |
|
[2025-02-20 04:22:17,536][01130] Avg episode rewards: #0: 14.040, true rewards: #0: 7.040 |
|
[2025-02-20 04:22:17,537][01130] Avg episode reward: 14.040, avg true_objective: 7.040 |
|
[2025-02-20 04:22:17,675][01130] Num frames 800... |
|
[2025-02-20 04:22:17,807][01130] Num frames 900... |
|
[2025-02-20 04:22:17,937][01130] Num frames 1000... |
|
[2025-02-20 04:22:18,090][01130] Avg episode rewards: #0: 9.865, true rewards: #0: 5.365 |
|
[2025-02-20 04:22:18,090][01130] Avg episode reward: 9.865, avg true_objective: 5.365 |
|
[2025-02-20 04:22:18,130][01130] Num frames 1100... |
|
[2025-02-20 04:22:18,264][01130] Num frames 1200... |
|
[2025-02-20 04:22:18,393][01130] Num frames 1300... |
|
[2025-02-20 04:22:18,520][01130] Num frames 1400... |
|
[2025-02-20 04:22:18,653][01130] Num frames 1500... |
|
[2025-02-20 04:22:18,792][01130] Num frames 1600... |
|
[2025-02-20 04:22:18,922][01130] Num frames 1700... |
|
[2025-02-20 04:22:19,052][01130] Num frames 1800... |
|
[2025-02-20 04:22:19,185][01130] Num frames 1900... |
|
[2025-02-20 04:22:19,314][01130] Num frames 2000... |
|
[2025-02-20 04:22:19,442][01130] Num frames 2100... |
|
[2025-02-20 04:22:19,573][01130] Num frames 2200... |
|
[2025-02-20 04:22:19,717][01130] Num frames 2300... |
|
[2025-02-20 04:22:19,846][01130] Num frames 2400... |
|
[2025-02-20 04:22:19,966][01130] Avg episode rewards: #0: 17.830, true rewards: #0: 8.163 |
|
[2025-02-20 04:22:19,967][01130] Avg episode reward: 17.830, avg true_objective: 8.163 |
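
In these evaluation summaries, "Avg episode reward" is a running mean over the episodes played so far, while "true rewards"/"avg true_objective" is the same statistic for the raw environment objective (before any training-time reward shaping). Because the averages are cumulative, individual episode returns can be recovered from consecutive lines:

    avgs = [14.040, 9.865, 17.830]        # running averages from the log above
    rewards = [avgs[0]] + [(i + 1) * avgs[i] - i * avgs[i - 1]
                           for i in range(1, len(avgs))]
    print([round(r, 3) for r in rewards])  # [14.04, 5.69, 33.76]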
|
[2025-02-20 04:22:20,034][01130] Num frames 2500... |
|
[2025-02-20 04:22:20,163][01130] Num frames 2600... |
|
[2025-02-20 04:22:20,294][01130] Num frames 2700... |
|
[2025-02-20 04:22:20,422][01130] Num frames 2800... |
|
[2025-02-20 04:22:20,549][01130] Num frames 2900... |
|
[2025-02-20 04:22:20,682][01130] Num frames 3000... |
|
[2025-02-20 04:22:20,819][01130] Num frames 3100... |
|
[2025-02-20 04:22:20,948][01130] Num frames 3200... |
|
[2025-02-20 04:22:21,080][01130] Num frames 3300... |
|
[2025-02-20 04:22:21,213][01130] Num frames 3400... |
|
[2025-02-20 04:22:21,343][01130] Num frames 3500... |
|
[2025-02-20 04:22:21,472][01130] Num frames 3600... |
|
[2025-02-20 04:22:21,599][01130] Num frames 3700... |
|
[2025-02-20 04:22:21,732][01130] Num frames 3800... |
|
[2025-02-20 04:22:21,867][01130] Num frames 3900... |
|
[2025-02-20 04:22:21,995][01130] Num frames 4000... |
|
[2025-02-20 04:22:22,125][01130] Num frames 4100... |
|
[2025-02-20 04:22:22,251][01130] Num frames 4200... |
|
[2025-02-20 04:22:22,364][01130] Avg episode rewards: #0: 25.857, true rewards: #0: 10.607 |
|
[2025-02-20 04:22:22,365][01130] Avg episode reward: 25.857, avg true_objective: 10.607 |
|
[2025-02-20 04:22:22,442][01130] Num frames 4300... |
|
[2025-02-20 04:22:22,568][01130] Num frames 4400... |
|
[2025-02-20 04:22:22,698][01130] Num frames 4500... |
|
[2025-02-20 04:22:22,832][01130] Num frames 4600... |
|
[2025-02-20 04:22:22,957][01130] Num frames 4700... |
|
[2025-02-20 04:22:23,085][01130] Num frames 4800... |
|
[2025-02-20 04:22:23,165][01130] Avg episode rewards: #0: 22.838, true rewards: #0: 9.638 |
|
[2025-02-20 04:22:23,166][01130] Avg episode reward: 22.838, avg true_objective: 9.638 |
|
[2025-02-20 04:22:23,270][01130] Num frames 4900... |
|
[2025-02-20 04:22:23,398][01130] Num frames 5000... |
|
[2025-02-20 04:22:23,525][01130] Num frames 5100... |
|
[2025-02-20 04:22:23,658][01130] Num frames 5200... |
|
[2025-02-20 04:22:23,784][01130] Num frames 5300... |
|
[2025-02-20 04:22:23,920][01130] Num frames 5400... |
|
[2025-02-20 04:22:24,048][01130] Num frames 5500... |
|
[2025-02-20 04:22:24,133][01130] Avg episode rewards: #0: 21.038, true rewards: #0: 9.205 |
|
[2025-02-20 04:22:24,134][01130] Avg episode reward: 21.038, avg true_objective: 9.205 |
|
[2025-02-20 04:22:24,234][01130] Num frames 5600... |
|
[2025-02-20 04:22:24,362][01130] Num frames 5700... |
|
[2025-02-20 04:22:24,489][01130] Num frames 5800... |
|
[2025-02-20 04:22:24,621][01130] Num frames 5900... |
|
[2025-02-20 04:22:24,749][01130] Num frames 6000... |
|
[2025-02-20 04:22:24,886][01130] Num frames 6100... |
|
[2025-02-20 04:22:25,016][01130] Num frames 6200... |
|
[2025-02-20 04:22:25,147][01130] Num frames 6300... |
|
[2025-02-20 04:22:25,297][01130] Num frames 6400... |
|
[2025-02-20 04:22:25,476][01130] Num frames 6500... |
|
[2025-02-20 04:22:25,654][01130] Num frames 6600... |
|
[2025-02-20 04:22:25,828][01130] Num frames 6700... |
|
[2025-02-20 04:22:25,937][01130] Avg episode rewards: #0: 22.607, true rewards: #0: 9.607 |
|
[2025-02-20 04:22:25,938][01130] Avg episode reward: 22.607, avg true_objective: 9.607 |
|
[2025-02-20 04:22:26,067][01130] Num frames 6800... |
|
[2025-02-20 04:22:26,238][01130] Num frames 6900... |
|
[2025-02-20 04:22:26,404][01130] Num frames 7000... |
|
[2025-02-20 04:22:26,577][01130] Num frames 7100... |
|
[2025-02-20 04:22:26,754][01130] Num frames 7200... |
|
[2025-02-20 04:22:26,940][01130] Num frames 7300... |
|
[2025-02-20 04:22:27,032][01130] Avg episode rewards: #0: 21.149, true rewards: #0: 9.149 |
|
[2025-02-20 04:22:27,033][01130] Avg episode reward: 21.149, avg true_objective: 9.149 |
|
[2025-02-20 04:22:27,161][01130] Num frames 7400... |
|
[2025-02-20 04:22:27,287][01130] Num frames 7500... |
|
[2025-02-20 04:22:27,416][01130] Num frames 7600... |
|
[2025-02-20 04:22:27,544][01130] Num frames 7700... |
|
[2025-02-20 04:22:27,603][01130] Avg episode rewards: #0: 19.226, true rewards: #0: 8.559 |
|
[2025-02-20 04:22:27,604][01130] Avg episode reward: 19.226, avg true_objective: 8.559 |
|
[2025-02-20 04:22:27,730][01130] Num frames 7800... |
|
[2025-02-20 04:22:27,867][01130] Num frames 7900... |
|
[2025-02-20 04:22:28,007][01130] Num frames 8000... |
|
[2025-02-20 04:22:28,137][01130] Num frames 8100... |
|
[2025-02-20 04:22:28,308][01130] Num frames 8200... |
|
[2025-02-20 04:22:28,441][01130] Num frames 8300... |
|
[2025-02-20 04:22:28,579][01130] Num frames 8400... |
|
[2025-02-20 04:22:28,716][01130] Num frames 8500... |
|
[2025-02-20 04:22:28,850][01130] Num frames 8600... |
|
[2025-02-20 04:22:28,989][01130] Num frames 8700... |
|
[2025-02-20 04:22:29,123][01130] Num frames 8800... |
|
[2025-02-20 04:22:29,255][01130] Num frames 8900... |
|
[2025-02-20 04:22:29,386][01130] Num frames 9000... |
|
[2025-02-20 04:22:29,518][01130] Num frames 9100... |
|
[2025-02-20 04:22:29,654][01130] Num frames 9200... |
|
[2025-02-20 04:22:29,782][01130] Num frames 9300... |
|
[2025-02-20 04:22:29,918][01130] Num frames 9400... |
|
[2025-02-20 04:22:30,060][01130] Num frames 9500... |
|
[2025-02-20 04:22:30,154][01130] Avg episode rewards: #0: 21.627, true rewards: #0: 9.527 |
|
[2025-02-20 04:22:30,154][01130] Avg episode reward: 21.627, avg true_objective: 9.527 |
|
[2025-02-20 04:23:31,148][01130] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-02-20 04:41:20,155][01130] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-20 04:41:20,156][01130] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-02-20 04:41:20,157][01130] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-20 04:41:20,158][01130] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-20 04:41:20,159][01130] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-20 04:41:20,160][01130] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-20 04:41:20,161][01130] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-02-20 04:41:20,162][01130] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-20 04:41:20,162][01130] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-02-20 04:41:20,163][01130] Adding new argument 'hf_repository'='cdr6934/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-02-20 04:41:20,164][01130] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-20 04:41:20,165][01130] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-20 04:41:20,166][01130] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-20 04:41:20,167][01130] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-20 04:41:20,167][01130] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-02-20 04:41:20,195][01130] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-20 04:41:20,197][01130] RunningMeanStd input shape: (1,) |
|
[2025-02-20 04:41:20,210][01130] ConvEncoder: input_channels=3 |
|
[2025-02-20 04:41:20,245][01130] Conv encoder output size: 512 |
|
[2025-02-20 04:41:20,246][01130] Policy head output size: 512 |
|
[2025-02-20 04:41:20,264][01130] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-20 04:41:20,720][01130] Num frames 100... |
|
[2025-02-20 04:41:20,847][01130] Num frames 200... |
|
[2025-02-20 04:41:20,972][01130] Num frames 300... |
|
[2025-02-20 04:41:21,099][01130] Num frames 400... |
|
[2025-02-20 04:41:21,230][01130] Num frames 500... |
|
[2025-02-20 04:41:21,361][01130] Num frames 600... |
|
[2025-02-20 04:41:21,495][01130] Num frames 700... |
|
[2025-02-20 04:41:21,626][01130] Num frames 800... |
|
[2025-02-20 04:41:21,751][01130] Num frames 900... |
|
[2025-02-20 04:41:21,875][01130] Num frames 1000... |
|
[2025-02-20 04:41:22,003][01130] Num frames 1100... |
|
[2025-02-20 04:41:22,134][01130] Num frames 1200... |
|
[2025-02-20 04:41:22,286][01130] Num frames 1300... |
|
[2025-02-20 04:41:22,421][01130] Num frames 1400... |
|
[2025-02-20 04:41:22,570][01130] Num frames 1500... |
|
[2025-02-20 04:41:22,643][01130] Avg episode rewards: #0: 37.140, true rewards: #0: 15.140 |
|
[2025-02-20 04:41:22,644][01130] Avg episode reward: 37.140, avg true_objective: 15.140 |
|
[2025-02-20 04:41:22,757][01130] Num frames 1600... |
|
[2025-02-20 04:41:22,890][01130] Num frames 1700... |
|
[2025-02-20 04:41:23,019][01130] Num frames 1800... |
|
[2025-02-20 04:41:23,147][01130] Num frames 1900... |
|
[2025-02-20 04:41:23,282][01130] Num frames 2000... |
|
[2025-02-20 04:41:23,409][01130] Num frames 2100... |
|
[2025-02-20 04:41:23,535][01130] Avg episode rewards: #0: 24.270, true rewards: #0: 10.770 |
|
[2025-02-20 04:41:23,536][01130] Avg episode reward: 24.270, avg true_objective: 10.770 |
|
[2025-02-20 04:41:23,601][01130] Num frames 2200... |
|
[2025-02-20 04:41:23,731][01130] Num frames 2300... |
|
[2025-02-20 04:41:23,865][01130] Num frames 2400... |
|
[2025-02-20 04:41:23,997][01130] Num frames 2500... |
|
[2025-02-20 04:41:24,136][01130] Num frames 2600... |
|
[2025-02-20 04:41:24,273][01130] Num frames 2700... |
|
[2025-02-20 04:41:24,401][01130] Num frames 2800... |
|
[2025-02-20 04:41:24,531][01130] Num frames 2900... |
|
[2025-02-20 04:41:24,671][01130] Num frames 3000... |
|
[2025-02-20 04:41:24,800][01130] Num frames 3100... |
|
[2025-02-20 04:41:24,929][01130] Num frames 3200... |
|
[2025-02-20 04:41:25,060][01130] Num frames 3300... |
|
[2025-02-20 04:41:25,203][01130] Avg episode rewards: #0: 24.567, true rewards: #0: 11.233 |
|
[2025-02-20 04:41:25,204][01130] Avg episode reward: 24.567, avg true_objective: 11.233 |
|
[2025-02-20 04:41:25,244][01130] Num frames 3400... |
|
[2025-02-20 04:41:25,372][01130] Num frames 3500... |
|
[2025-02-20 04:41:25,510][01130] Num frames 3600... |
|
[2025-02-20 04:41:25,703][01130] Num frames 3700... |
|
[2025-02-20 04:41:25,891][01130] Num frames 3800... |
|
[2025-02-20 04:41:26,067][01130] Num frames 3900... |
|
[2025-02-20 04:41:26,244][01130] Num frames 4000... |
|
[2025-02-20 04:41:26,414][01130] Num frames 4100... |
|
[2025-02-20 04:41:26,539][01130] Avg episode rewards: #0: 23.345, true rewards: #0: 10.345 |
|
[2025-02-20 04:41:26,540][01130] Avg episode reward: 23.345, avg true_objective: 10.345 |
|
[2025-02-20 04:41:26,651][01130] Num frames 4200... |
|
[2025-02-20 04:41:26,821][01130] Num frames 4300... |
|
[2025-02-20 04:41:26,995][01130] Num frames 4400... |
|
[2025-02-20 04:41:27,175][01130] Num frames 4500... |
|
[2025-02-20 04:41:27,353][01130] Num frames 4600... |
|
[2025-02-20 04:41:27,465][01130] Avg episode rewards: #0: 20.858, true rewards: #0: 9.258 |
|
[2025-02-20 04:41:27,467][01130] Avg episode reward: 20.858, avg true_objective: 9.258 |
|
[2025-02-20 04:41:27,595][01130] Num frames 4700... |
|
[2025-02-20 04:41:27,787][01130] Num frames 4800... |
|
[2025-02-20 04:41:27,950][01130] Avg episode rewards: #0: 17.972, true rewards: #0: 8.138 |
|
[2025-02-20 04:41:27,951][01130] Avg episode reward: 17.972, avg true_objective: 8.138 |
|
[2025-02-20 04:41:27,977][01130] Num frames 4900... |
|
[2025-02-20 04:41:28,106][01130] Num frames 5000... |
|
[2025-02-20 04:41:28,240][01130] Num frames 5100... |
|
[2025-02-20 04:41:28,374][01130] Num frames 5200... |
|
[2025-02-20 04:41:28,505][01130] Num frames 5300... |
|
[2025-02-20 04:41:28,643][01130] Num frames 5400... |
|
[2025-02-20 04:41:28,781][01130] Num frames 5500... |
|
[2025-02-20 04:41:28,917][01130] Num frames 5600... |
|
[2025-02-20 04:41:29,050][01130] Num frames 5700... |
|
[2025-02-20 04:41:29,184][01130] Num frames 5800... |
|
[2025-02-20 04:41:29,315][01130] Num frames 5900... |
|
[2025-02-20 04:41:29,444][01130] Num frames 6000... |
|
[2025-02-20 04:41:29,575][01130] Num frames 6100... |
|
[2025-02-20 04:41:29,709][01130] Num frames 6200... |
|
[2025-02-20 04:41:29,845][01130] Num frames 6300... |
|
[2025-02-20 04:41:29,974][01130] Num frames 6400... |
|
[2025-02-20 04:41:30,102][01130] Num frames 6500... |
|
[2025-02-20 04:41:30,239][01130] Num frames 6600... |
|
[2025-02-20 04:41:30,366][01130] Num frames 6700... |
|
[2025-02-20 04:41:30,500][01130] Num frames 6800... |
|
[2025-02-20 04:41:30,634][01130] Num frames 6900... |
|
[2025-02-20 04:41:30,805][01130] Avg episode rewards: #0: 22.976, true rewards: #0: 9.976 |
|
[2025-02-20 04:41:30,807][01130] Avg episode reward: 22.976, avg true_objective: 9.976 |
|
[2025-02-20 04:41:30,831][01130] Num frames 7000... |
|
[2025-02-20 04:41:30,962][01130] Num frames 7100... |
|
[2025-02-20 04:41:31,093][01130] Num frames 7200... |
|
[2025-02-20 04:41:31,226][01130] Num frames 7300... |
|
[2025-02-20 04:41:31,358][01130] Num frames 7400... |
|
[2025-02-20 04:41:31,487][01130] Num frames 7500... |
|
[2025-02-20 04:41:31,620][01130] Num frames 7600... |
|
[2025-02-20 04:41:31,745][01130] Num frames 7700... |
|
[2025-02-20 04:41:31,885][01130] Num frames 7800... |
|
[2025-02-20 04:41:32,017][01130] Num frames 7900... |
|
[2025-02-20 04:41:32,149][01130] Num frames 8000... |
|
[2025-02-20 04:41:32,325][01130] Avg episode rewards: #0: 23.236, true rewards: #0: 10.111 |
|
[2025-02-20 04:41:32,326][01130] Avg episode reward: 23.236, avg true_objective: 10.111 |
|
[2025-02-20 04:41:32,343][01130] Num frames 8100... |
|
[2025-02-20 04:41:32,470][01130] Num frames 8200... |
|
[2025-02-20 04:41:32,599][01130] Num frames 8300... |
|
[2025-02-20 04:41:32,727][01130] Num frames 8400... |
|
[2025-02-20 04:41:32,877][01130] Num frames 8500... |
|
[2025-02-20 04:41:33,007][01130] Num frames 8600... |
|
[2025-02-20 04:41:33,140][01130] Num frames 8700... |
|
[2025-02-20 04:41:33,275][01130] Num frames 8800... |
|
[2025-02-20 04:41:33,405][01130] Num frames 8900... |
|
[2025-02-20 04:41:33,537][01130] Num frames 9000... |
|
[2025-02-20 04:41:33,676][01130] Num frames 9100... |
|
[2025-02-20 04:41:33,831][01130] Num frames 9200... |
|
[2025-02-20 04:41:33,964][01130] Num frames 9300... |
|
[2025-02-20 04:41:34,093][01130] Num frames 9400... |
|
[2025-02-20 04:41:34,227][01130] Num frames 9500... |
|
[2025-02-20 04:41:34,357][01130] Num frames 9600... |
|
[2025-02-20 04:41:34,489][01130] Num frames 9700... |
|
[2025-02-20 04:41:34,624][01130] Num frames 9800... |
|
[2025-02-20 04:41:34,761][01130] Num frames 9900... |
|
[2025-02-20 04:41:34,910][01130] Num frames 10000... |
|
[2025-02-20 04:41:35,043][01130] Num frames 10100... |
|
[2025-02-20 04:41:35,216][01130] Avg episode rewards: #0: 26.877, true rewards: #0: 11.321 |
|
[2025-02-20 04:41:35,217][01130] Avg episode reward: 26.877, avg true_objective: 11.321 |
|
[2025-02-20 04:41:35,233][01130] Num frames 10200... |
|
[2025-02-20 04:41:35,363][01130] Num frames 10300... |
|
[2025-02-20 04:41:35,491][01130] Num frames 10400... |
|
[2025-02-20 04:41:35,620][01130] Num frames 10500... |
|
[2025-02-20 04:41:35,745][01130] Num frames 10600... |
|
[2025-02-20 04:41:35,877][01130] Num frames 10700... |
|
[2025-02-20 04:41:35,985][01130] Avg episode rewards: #0: 25.233, true rewards: #0: 10.733 |
|
[2025-02-20 04:41:35,986][01130] Avg episode reward: 25.233, avg true_objective: 10.733 |
|
[2025-02-20 04:42:43,767][01130] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-02-20 04:42:52,079][01130] The model has been pushed to https://huggingface.co/cdr6934/rl_course_vizdoom_health_gathering_supreme |
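
The push step uploads the experiment directory (config, checkpoints, replay video) to the Hub repository linked above. As an illustration only, a roughly equivalent manual upload via the public huggingface_hub API; this is not Sample Factory's own push helper:

    from huggingface_hub import upload_folder

    upload_folder(
        repo_id="cdr6934/rl_course_vizdoom_health_gathering_supreme",
        folder_path="/content/train_dir/default_experiment",
        repo_type="model",  # assumes a prior huggingface-cli login
    )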
|
[2025-02-20 04:46:53,963][01130] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-20 04:46:53,964][01130] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-20 04:46:53,966][01130] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-20 04:46:53,967][01130] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-20 04:46:53,969][01130] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-20 04:46:53,970][01130] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-02-20 04:46:53,972][01130] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-20 04:46:53,974][01130] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-02-20 04:46:53,975][01130] Adding new argument 'hf_repository'='cdr6934/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-02-20 04:46:53,978][01130] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-20 04:46:53,978][01130] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-20 04:46:53,980][01130] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-20 04:46:53,984][01130] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-20 04:46:53,985][01130] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-02-20 04:46:54,029][01130] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-20 04:46:54,031][01130] RunningMeanStd input shape: (1,) |
|
[2025-02-20 04:46:54,049][01130] ConvEncoder: input_channels=3 |
|
[2025-02-20 04:46:54,102][01130] Conv encoder output size: 512 |
|
[2025-02-20 04:46:54,105][01130] Policy head output size: 512 |
|
[2025-02-20 04:46:54,132][01130] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-20 04:46:54,593][01130] Num frames 100... |
|
[2025-02-20 04:46:54,732][01130] Num frames 200... |
|
[2025-02-20 04:46:54,859][01130] Num frames 300... |
|
[2025-02-20 04:46:54,994][01130] Num frames 400... |
|
[2025-02-20 04:46:55,131][01130] Num frames 500... |
|
[2025-02-20 04:46:55,267][01130] Num frames 600... |
|
[2025-02-20 04:46:55,398][01130] Num frames 700... |
|
[2025-02-20 04:46:55,529][01130] Num frames 800... |
|
[2025-02-20 04:46:55,709][01130] Avg episode rewards: #0: 15.960, true rewards: #0: 8.960 |
|
[2025-02-20 04:46:55,710][01130] Avg episode reward: 15.960, avg true_objective: 8.960 |
[2025-02-20 04:46:55,718][01130] Num frames 900...
[2025-02-20 04:46:55,847][01130] Num frames 1000...
[2025-02-20 04:46:55,975][01130] Num frames 1100...
[2025-02-20 04:46:56,111][01130] Num frames 1200...
[2025-02-20 04:46:56,242][01130] Num frames 1300...
[2025-02-20 04:46:56,373][01130] Num frames 1400...
[2025-02-20 04:46:56,501][01130] Num frames 1500...
[2025-02-20 04:46:56,631][01130] Num frames 1600...
[2025-02-20 04:46:56,756][01130] Num frames 1700...
[2025-02-20 04:46:56,884][01130] Num frames 1800...
[2025-02-20 04:46:57,013][01130] Num frames 1900...
[2025-02-20 04:46:57,148][01130] Num frames 2000...
[2025-02-20 04:46:57,276][01130] Num frames 2100...
[2025-02-20 04:46:57,401][01130] Num frames 2200...
[2025-02-20 04:46:57,531][01130] Num frames 2300...
[2025-02-20 04:46:57,663][01130] Num frames 2400...
[2025-02-20 04:46:57,792][01130] Num frames 2500...
[2025-02-20 04:46:57,923][01130] Num frames 2600...
[2025-02-20 04:46:58,053][01130] Num frames 2700...
[2025-02-20 04:46:58,245][01130] Avg episode rewards: #0: 34.495, true rewards: #0: 13.995
[2025-02-20 04:46:58,246][01130] Avg episode reward: 34.495, avg true_objective: 13.995
[2025-02-20 04:46:58,248][01130] Num frames 2800...
[2025-02-20 04:46:58,380][01130] Num frames 2900...
[2025-02-20 04:46:58,509][01130] Num frames 3000...
[2025-02-20 04:46:58,642][01130] Num frames 3100...
[2025-02-20 04:46:58,771][01130] Num frames 3200...
[2025-02-20 04:46:58,900][01130] Num frames 3300...
[2025-02-20 04:46:59,033][01130] Num frames 3400...
[2025-02-20 04:46:59,180][01130] Num frames 3500...
[2025-02-20 04:46:59,308][01130] Num frames 3600...
[2025-02-20 04:46:59,438][01130] Num frames 3700...
[2025-02-20 04:46:59,568][01130] Num frames 3800...
[2025-02-20 04:46:59,699][01130] Num frames 3900...
[2025-02-20 04:46:59,842][01130] Num frames 4000...
[2025-02-20 04:46:59,976][01130] Num frames 4100...
[2025-02-20 04:47:00,107][01130] Num frames 4200...
[2025-02-20 04:47:00,247][01130] Num frames 4300...
[2025-02-20 04:47:00,310][01130] Avg episode rewards: #0: 35.350, true rewards: #0: 14.350
[2025-02-20 04:47:00,311][01130] Avg episode reward: 35.350, avg true_objective: 14.350
[2025-02-20 04:47:00,431][01130] Num frames 4400...
[2025-02-20 04:47:00,557][01130] Num frames 4500...
[2025-02-20 04:47:00,688][01130] Num frames 4600...
[2025-02-20 04:47:00,815][01130] Num frames 4700...
[2025-02-20 04:47:00,946][01130] Num frames 4800...
[2025-02-20 04:47:01,048][01130] Avg episode rewards: #0: 29.087, true rewards: #0: 12.087
[2025-02-20 04:47:01,049][01130] Avg episode reward: 29.087, avg true_objective: 12.087
[2025-02-20 04:47:01,132][01130] Num frames 4900...
[2025-02-20 04:47:01,270][01130] Num frames 5000...
[2025-02-20 04:47:01,400][01130] Num frames 5100...
[2025-02-20 04:47:01,530][01130] Num frames 5200...
[2025-02-20 04:47:01,661][01130] Num frames 5300...
[2025-02-20 04:47:01,789][01130] Num frames 5400...
[2025-02-20 04:47:01,918][01130] Num frames 5500...
[2025-02-20 04:47:02,047][01130] Num frames 5600...
[2025-02-20 04:47:02,181][01130] Num frames 5700...
[2025-02-20 04:47:02,315][01130] Num frames 5800...
[2025-02-20 04:47:02,443][01130] Num frames 5900...
[2025-02-20 04:47:02,570][01130] Num frames 6000...
[2025-02-20 04:47:02,703][01130] Num frames 6100...
[2025-02-20 04:47:02,795][01130] Avg episode rewards: #0: 29.652, true rewards: #0: 12.252
[2025-02-20 04:47:02,796][01130] Avg episode reward: 29.652, avg true_objective: 12.252
[2025-02-20 04:47:02,895][01130] Num frames 6200...
[2025-02-20 04:47:03,024][01130] Num frames 6300...
[2025-02-20 04:47:03,152][01130] Num frames 6400...
[2025-02-20 04:47:03,291][01130] Num frames 6500...
[2025-02-20 04:47:03,417][01130] Num frames 6600...
[2025-02-20 04:47:03,550][01130] Num frames 6700...
[2025-02-20 04:47:03,700][01130] Num frames 6800...
[2025-02-20 04:47:03,832][01130] Num frames 6900...
[2025-02-20 04:47:03,968][01130] Num frames 7000...
[2025-02-20 04:47:04,097][01130] Num frames 7100...
[2025-02-20 04:47:04,233][01130] Num frames 7200...
[2025-02-20 04:47:04,429][01130] Num frames 7300...
[2025-02-20 04:47:04,615][01130] Num frames 7400...
[2025-02-20 04:47:04,787][01130] Num frames 7500...
[2025-02-20 04:47:04,956][01130] Num frames 7600...
[2025-02-20 04:47:05,130][01130] Num frames 7700...
[2025-02-20 04:47:05,301][01130] Num frames 7800...
[2025-02-20 04:47:05,482][01130] Num frames 7900...
[2025-02-20 04:47:05,571][01130] Avg episode rewards: #0: 33.030, true rewards: #0: 13.197
[2025-02-20 04:47:05,572][01130] Avg episode reward: 33.030, avg true_objective: 13.197
[2025-02-20 04:47:05,726][01130] Num frames 8000...
[2025-02-20 04:47:05,897][01130] Num frames 8100...
[2025-02-20 04:47:06,087][01130] Num frames 8200...
[2025-02-20 04:47:06,269][01130] Num frames 8300...
[2025-02-20 04:47:06,429][01130] Num frames 8400...
[2025-02-20 04:47:06,565][01130] Avg episode rewards: #0: 29.517, true rewards: #0: 12.089
[2025-02-20 04:47:06,566][01130] Avg episode reward: 29.517, avg true_objective: 12.089
[2025-02-20 04:47:06,618][01130] Num frames 8500...
[2025-02-20 04:47:06,745][01130] Num frames 8600...
[2025-02-20 04:47:06,873][01130] Num frames 8700...
[2025-02-20 04:47:07,011][01130] Num frames 8800...
[2025-02-20 04:47:07,139][01130] Num frames 8900...
[2025-02-20 04:47:07,270][01130] Num frames 9000...
[2025-02-20 04:47:07,399][01130] Num frames 9100...
[2025-02-20 04:47:07,533][01130] Num frames 9200...
[2025-02-20 04:47:07,666][01130] Num frames 9300...
[2025-02-20 04:47:07,832][01130] Avg episode rewards: #0: 28.237, true rewards: #0: 11.737
[2025-02-20 04:47:07,833][01130] Avg episode reward: 28.237, avg true_objective: 11.737
[2025-02-20 04:47:07,848][01130] Num frames 9400...
[2025-02-20 04:47:07,974][01130] Num frames 9500...
[2025-02-20 04:47:08,103][01130] Num frames 9600...
[2025-02-20 04:47:08,234][01130] Num frames 9700...
[2025-02-20 04:47:08,363][01130] Num frames 9800...
[2025-02-20 04:47:08,497][01130] Num frames 9900...
[2025-02-20 04:47:08,627][01130] Num frames 10000...
[2025-02-20 04:47:08,759][01130] Num frames 10100...
[2025-02-20 04:47:08,887][01130] Num frames 10200...
[2025-02-20 04:47:09,014][01130] Avg episode rewards: #0: 27.282, true rewards: #0: 11.393
[2025-02-20 04:47:09,015][01130] Avg episode reward: 27.282, avg true_objective: 11.393
[2025-02-20 04:47:09,076][01130] Num frames 10300...
[2025-02-20 04:47:09,205][01130] Num frames 10400...
[2025-02-20 04:47:09,332][01130] Num frames 10500...
[2025-02-20 04:47:09,458][01130] Num frames 10600...
[2025-02-20 04:47:09,594][01130] Num frames 10700...
[2025-02-20 04:47:09,779][01130] Avg episode rewards: #0: 25.698, true rewards: #0: 10.798
[2025-02-20 04:47:09,780][01130] Avg episode reward: 25.698, avg true_objective: 10.798
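Because each summary line is a running mean, an individual episode's scores can be recovered by differencing consecutive summaries. The tenth and final episode, for example, scored 10 x 25.698 - 9 x 27.282 = 11.442 (shaped) and 10 x 10.798 - 9 x 11.393 = 5.443 (true objective), as this one-liner check confirms:

```python
# Episode 10's scores, recovered from the last two running means above.
r10 = 10 * 25.698 - 9 * 27.282   # shaped reward  -> 11.442
t10 = 10 * 10.798 - 9 * 11.393   # true objective -> 5.443
print(round(r10, 3), round(t10, 3))
```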
[2025-02-20 04:47:09,784][01130] Num frames 10800...
[2025-02-20 04:48:16,420][01130] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
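The pass ends after ten episodes with the replay written into the experiment directory. For reference, here is a hedged sketch of how an evaluation-plus-video run like this one is typically relaunched with Sample Factory 2.x; the module path, environment name, and flags are assumptions patterned on its example enjoy scripts (only the `train_dir` and `experiment` values are taken from the log), so verify them against your installed version:

```python
# Hypothetical relaunch of the evaluation that produced this log.
import sys
from sf_examples.vizdoom.enjoy_vizdoom import main  # assumed module path

sys.argv = [
    "enjoy_vizdoom",
    "--env=doom_health_gathering_supreme",  # assumed environment
    "--train_dir=/content/train_dir",       # from the save path in the log
    "--experiment=default_experiment",      # from the save path in the log
    "--no_render",                          # headless, e.g. in Colab
    "--save_video",                         # writes replay.mp4
    "--max_num_episodes=10",                # matches the 10 episodes above
]
sys.exit(main())
```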