|
[2025-03-08 08:43:53,248][00316] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-03-08 08:43:53,250][00316] Rollout worker 0 uses device cpu |
|
[2025-03-08 08:43:53,251][00316] Rollout worker 1 uses device cpu |
|
[2025-03-08 08:43:53,252][00316] Rollout worker 2 uses device cpu |
|
[2025-03-08 08:43:53,253][00316] Rollout worker 3 uses device cpu |
|
[2025-03-08 08:43:53,253][00316] Rollout worker 4 uses device cpu |
|
[2025-03-08 08:43:53,254][00316] Rollout worker 5 uses device cpu |
|
[2025-03-08 08:43:53,255][00316] Rollout worker 6 uses device cpu |
|
[2025-03-08 08:43:53,256][00316] Rollout worker 7 uses device cpu |
|
[2025-03-08 08:43:53,401][00316] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 08:43:53,402][00316] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-03-08 08:43:53,436][00316] Starting all processes... |
|
[2025-03-08 08:43:53,437][00316] Starting process learner_proc0 |
|
[2025-03-08 08:43:53,590][00316] Starting all processes... |
|
[2025-03-08 08:43:53,602][00316] Starting process inference_proc0-0 |
|
[2025-03-08 08:43:53,603][00316] Starting process rollout_proc0 |
|
[2025-03-08 08:43:53,603][00316] Starting process rollout_proc1 |
|
[2025-03-08 08:43:53,604][00316] Starting process rollout_proc2 |
|
[2025-03-08 08:43:53,604][00316] Starting process rollout_proc3 |
|
[2025-03-08 08:43:53,604][00316] Starting process rollout_proc4 |
|
[2025-03-08 08:43:53,604][00316] Starting process rollout_proc5 |
|
[2025-03-08 08:43:53,604][00316] Starting process rollout_proc6 |
|
[2025-03-08 08:43:53,604][00316] Starting process rollout_proc7 |
|
[2025-03-08 08:44:09,810][03197] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 08:44:09,827][03197] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
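
The learner pins itself to physical GPU 0 by exporting CUDA_VISIBLE_DEVICES before CUDA initializes. A minimal sketch of that idiom (illustrative, not Sample Factory's actual code):

```python
import os

# Restrict this process to physical GPU 0. This must happen before the
# first CUDA call, because the driver snapshots the variable on init.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch  # imported only after the env var is set

if torch.cuda.is_available():
    # Inside this process the single visible GPU is re-indexed as cuda:0,
    # which is why the log reports "Num visible devices: 1" next.
    print(torch.cuda.device_count())  # -> 1
    device = torch.device("cuda:0")
```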
|
[2025-03-08 08:44:09,904][03197] Num visible devices: 1 |
|
[2025-03-08 08:44:09,956][03197] Starting seed is not provided |
|
[2025-03-08 08:44:09,956][03197] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 08:44:09,956][03197] Initializing actor-critic model on device cuda:0 |
|
[2025-03-08 08:44:09,957][03197] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-08 08:44:09,960][03197] RunningMeanStd input shape: (1,) |
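
The two RunningMeanStd modules track normalization statistics for the (3, 72, 128) image observations and for scalar returns. A minimal NumPy sketch of the running mean/variance update such normalizers perform (Chan's parallel update rule; the real modules here are in-place TorchScript versions):

```python
import numpy as np

class RunningMeanStd:
    """Running mean/variance via Chan's parallel-update rule (sketch)."""

    def __init__(self, shape, eps=1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = eps  # avoids division by zero on the first batch

    def update(self, batch):
        batch_mean = batch.mean(axis=0)
        batch_var = batch.var(axis=0)
        n = batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + n
        self.mean += delta * n / total
        # Combine both variances plus the between-group term.
        m2 = self.var * self.count + batch_var * n + delta**2 * self.count * n / total
        self.var = m2 / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

obs_rms = RunningMeanStd(shape=(3, 72, 128))  # image observations
ret_rms = RunningMeanStd(shape=(1,))          # scalar returns
```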
|
[2025-03-08 08:44:10,005][03212] Worker 1 uses CPU cores [1] |
|
[2025-03-08 08:44:10,051][03211] Worker 0 uses CPU cores [0] |
|
[2025-03-08 08:44:10,085][03215] Worker 4 uses CPU cores [0] |
|
[2025-03-08 08:44:10,090][03214] Worker 2 uses CPU cores [0] |
|
[2025-03-08 08:44:10,258][03217] Worker 6 uses CPU cores [0] |
|
[2025-03-08 08:44:10,279][03218] Worker 7 uses CPU cores [1] |
|
[2025-03-08 08:44:10,341][03213] Worker 3 uses CPU cores [1] |
|
[2025-03-08 08:44:10,365][03216] Worker 5 uses CPU cores [1] |
|
[2025-03-08 08:44:10,371][03197] ConvEncoder: input_channels=3 |
|
[2025-03-08 08:44:10,429][03210] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 08:44:10,429][03210] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-03-08 08:44:10,447][03210] Num visible devices: 1 |
|
[2025-03-08 08:44:10,652][03197] Conv encoder output size: 512 |
|
[2025-03-08 08:44:10,652][03197] Policy head output size: 512 |
|
[2025-03-08 08:44:10,705][03197] Created Actor Critic model with architecture: |
|
[2025-03-08 08:44:10,706][03197] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
|
[2025-03-08 08:44:11,050][03197] Using optimizer <class 'torch.optim.adam.Adam'> |
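
The printed module tree is the standard pixel pipeline: normalize observations, conv-encode to a 512-d embedding, run a GRU core, then a 1-unit value head and a 5-logit action head, trained with Adam. A condensed PyTorch sketch of the same data flow (kernel/stride settings and the learning rate are assumptions; the log only shows layer types and the 512-d sizes):

```python
import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions=5, hidden=512):
        super().__init__()
        # Three Conv2d+ELU pairs as in the printout; exact kernels/strides
        # are assumptions -- the log only shows layer types.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.ELU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # probe the flattened size for (3, 72, 128) inputs
            n_flat = self.conv(torch.zeros(1, 3, 72, 128)).shape[1]
        self.mlp = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())  # mlp_layers
        self.core = nn.GRU(hidden, hidden)                   # (core): GRU(512, 512)
        self.critic_linear = nn.Linear(hidden, 1)            # value head
        self.action_logits = nn.Linear(hidden, num_actions)  # 5 action logits

    def forward(self, obs, rnn_state=None):
        x = self.mlp(self.conv(obs))                         # (batch, 512)
        x, new_state = self.core(x.unsqueeze(0), rnn_state)  # one-step GRU core
        x = x.squeeze(0)
        return self.action_logits(x), self.critic_linear(x), new_state

model = ActorCriticSketch()
optimizer = torch.optim.Adam(model.parameters())  # Adam per the log; lr left at default
logits, value, state = model(torch.zeros(4, 3, 72, 128))
```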
|
[2025-03-08 08:44:13,402][00316] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-03-08 08:44:13,409][00316] Heartbeat connected on RolloutWorker_w0 |
|
[2025-03-08 08:44:13,416][00316] Heartbeat connected on RolloutWorker_w2 |
|
[2025-03-08 08:44:13,417][00316] Heartbeat connected on RolloutWorker_w1 |
|
[2025-03-08 08:44:13,419][00316] Heartbeat connected on RolloutWorker_w3 |
|
[2025-03-08 08:44:13,423][00316] Heartbeat connected on RolloutWorker_w4 |
|
[2025-03-08 08:44:13,429][00316] Heartbeat connected on RolloutWorker_w6 |
|
[2025-03-08 08:44:13,431][00316] Heartbeat connected on RolloutWorker_w5 |
|
[2025-03-08 08:44:13,440][00316] Heartbeat connected on RolloutWorker_w7 |
|
[2025-03-08 08:44:13,487][00316] Heartbeat connected on Batcher_0 |
|
[2025-03-08 08:44:15,271][03197] No checkpoints found |
|
[2025-03-08 08:44:15,271][03197] Did not load from checkpoint, starting from scratch! |
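
"No checkpoints found" means the learner scanned the experiment's checkpoint directory before deciding to initialize fresh weights. A hedged sketch of that resume-or-start-fresh pattern (the helper and state-dict keys are illustrative, not Sample Factory's API; the path mirrors this run's layout):

```python
import glob
import os

import torch

def load_latest_checkpoint(model, checkpoint_dir):
    """Illustrative resume helper: load the newest checkpoint if one exists."""
    paths = sorted(glob.glob(os.path.join(checkpoint_dir, "checkpoint_*.pth")))
    if not paths:
        print("No checkpoints found")  # matches the log: start from scratch
        return 0
    state = torch.load(paths[-1], map_location="cpu")
    model.load_state_dict(state["model"])       # key names are illustrative
    return state.get("policy_version", 0)

# version = load_latest_checkpoint(model, "/content/train_dir/default_experiment/checkpoint_p0")
```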
|
[2025-03-08 08:44:15,271][03197] Initialized policy 0 weights for model version 0 |
|
[2025-03-08 08:44:15,274][03197] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 08:44:15,283][03197] LearnerWorker_p0 finished initialization! |
|
[2025-03-08 08:44:15,284][00316] Heartbeat connected on LearnerWorker_p0 |
|
[2025-03-08 08:44:15,522][03210] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-08 08:44:15,523][03210] RunningMeanStd input shape: (1,) |
|
[2025-03-08 08:44:15,589][03210] ConvEncoder: input_channels=3 |
|
[2025-03-08 08:44:15,693][03210] Conv encoder output size: 512 |
|
[2025-03-08 08:44:15,694][03210] Policy head output size: 512 |
|
[2025-03-08 08:44:15,729][00316] Inference worker 0-0 is ready! |
|
[2025-03-08 08:44:15,730][00316] All inference workers are ready! Signal rollout workers to start! |
|
[2025-03-08 08:44:16,099][03214] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 08:44:16,148][03218] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 08:44:16,158][03212] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 08:44:16,151][03213] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 08:44:16,259][03211] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 08:44:16,279][03215] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 08:44:16,298][03217] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 08:44:16,329][03216] Doom resolution: 160x120, resize resolution: (128, 72) |
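
Each rollout worker renders Doom at 160x120 and resizes frames to 128x72, which in channels-first layout is exactly the (3, 72, 128) observation shape seen at model init. A minimal sketch of that preprocessing step (OpenCV is used here for illustration; the actual code is Sample Factory's resize wrapper):

```python
import cv2
import numpy as np

def preprocess(frame_hwc):
    """Resize a 120x160x3 Doom frame into the network's input layout."""
    assert frame_hwc.shape == (120, 160, 3)
    # cv2.resize takes (width, height), hence (128, 72).
    resized = cv2.resize(frame_hwc, (128, 72), interpolation=cv2.INTER_AREA)
    # HWC -> CHW to match the logged RunningMeanStd input shape (3, 72, 128).
    return np.transpose(resized, (2, 0, 1))

obs = preprocess(np.zeros((120, 160, 3), dtype=np.uint8))
print(obs.shape)  # (3, 72, 128)
```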
|
[2025-03-08 08:44:17,440][00316] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-08 08:44:18,443][03215] Decorrelating experience for 0 frames... |
|
[2025-03-08 08:44:18,443][03218] Decorrelating experience for 0 frames... |
|
[2025-03-08 08:44:18,444][03211] Decorrelating experience for 0 frames... |
|
[2025-03-08 08:44:18,445][03216] Decorrelating experience for 0 frames... |
|
[2025-03-08 08:44:18,445][03214] Decorrelating experience for 0 frames... |
|
[2025-03-08 08:44:18,446][03212] Decorrelating experience for 0 frames... |
|
[2025-03-08 08:44:19,114][03216] Decorrelating experience for 32 frames... |
|
[2025-03-08 08:44:19,741][03214] Decorrelating experience for 32 frames... |
|
[2025-03-08 08:44:19,745][03215] Decorrelating experience for 32 frames... |
|
[2025-03-08 08:44:19,743][03211] Decorrelating experience for 32 frames... |
|
[2025-03-08 08:44:19,820][03216] Decorrelating experience for 64 frames... |
|
[2025-03-08 08:44:20,549][03212] Decorrelating experience for 32 frames... |
|
[2025-03-08 08:44:20,734][03216] Decorrelating experience for 96 frames... |
|
[2025-03-08 08:44:21,304][03217] Decorrelating experience for 0 frames... |
|
[2025-03-08 08:44:21,656][03215] Decorrelating experience for 64 frames... |
|
[2025-03-08 08:44:21,660][03214] Decorrelating experience for 64 frames... |
|
[2025-03-08 08:44:21,661][03211] Decorrelating experience for 64 frames... |
|
[2025-03-08 08:44:21,880][03218] Decorrelating experience for 32 frames... |
|
[2025-03-08 08:44:22,070][03212] Decorrelating experience for 64 frames... |
|
[2025-03-08 08:44:22,435][00316] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 2.4. Samples: 12. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-08 08:44:23,562][03218] Decorrelating experience for 64 frames... |
|
[2025-03-08 08:44:23,579][03217] Decorrelating experience for 32 frames... |
|
[2025-03-08 08:44:23,586][03212] Decorrelating experience for 96 frames... |
|
[2025-03-08 08:44:23,703][03214] Decorrelating experience for 96 frames... |
|
[2025-03-08 08:44:23,705][03211] Decorrelating experience for 96 frames... |
|
[2025-03-08 08:44:23,707][03215] Decorrelating experience for 96 frames... |
|
[2025-03-08 08:44:25,135][03218] Decorrelating experience for 96 frames... |
|
[2025-03-08 08:44:26,714][03217] Decorrelating experience for 64 frames... |
|
[2025-03-08 08:44:26,840][03197] Signal inference workers to stop experience collection... |
|
[2025-03-08 08:44:26,864][03210] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-03-08 08:44:27,227][03217] Decorrelating experience for 96 frames... |
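
The "Decorrelating experience for N frames" lines show each worker warming its environments up in 32-frame chunks (0, 32, 64, 96) at staggered times before real collection begins, so the eight workers do not all submit synchronized, correlated trajectories. A toy sketch of that warm-up loop (the chunked schedule is inferred from the logged progress lines; the dummy env just makes the sketch self-contained):

```python
import random

class DummyEnv:
    """Stand-in for a Gym-style env so the sketch runs standalone."""
    def step(self, action):
        return None, 0.0, False, {}

def decorrelate(env, total_frames=96, chunk=32):
    # Warm up in 32-frame chunks, mirroring the 0/32/64/96 progress lines.
    for done in range(0, total_frames + 1, chunk):
        print(f"Decorrelating experience for {done} frames...")
        for _ in range(min(chunk, total_frames - done)):
            env.step(random.randrange(5))  # random action from the 5-action space

decorrelate(DummyEnv())
```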
|
[2025-03-08 08:44:27,435][00316] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 100.8. Samples: 1008. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-08 08:44:27,438][00316] Avg episode reward: [(0, '2.745')] |
|
[2025-03-08 08:44:28,806][03197] Signal inference workers to resume experience collection... |
|
[2025-03-08 08:44:28,807][03210] InferenceWorker_p0-w0: resuming experience collection |
|
[2025-03-08 08:44:32,435][00316] Fps is (10 sec: 1638.4, 60 sec: 1092.6, 300 sec: 1092.6). Total num frames: 16384. Throughput: 0: 258.2. Samples: 3872. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
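
Each "Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)" line reports environment-frame throughput averaged over three trailing windows; the very first report shows nan because no window contains two samples yet. A sketch of how such windowed rates can be computed from (timestamp, total_frames) history (illustrative, not the reporter's actual code):

```python
import collections
import time

samples = collections.deque(maxlen=300)  # (timestamp, total_frames) history

def windowed_fps(now, window):
    """Average FPS over the trailing `window` seconds, nan if too few samples."""
    past = [(t, f) for t, f in samples if now - t <= window]
    if len(past) < 2:
        return float("nan")
    (t0, f0), (t1, f1) = past[0], past[-1]
    return (f1 - f0) / (t1 - t0)

# On every report tick:
# samples.append((time.time(), total_env_frames))
# print(f"Fps is (10 sec: {windowed_fps(time.time(), 10):.1f}, ...)")
```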
|
[2025-03-08 08:44:32,437][00316] Avg episode reward: [(0, '3.310')] |
|
[2025-03-08 08:44:37,435][00316] Fps is (10 sec: 3686.4, 60 sec: 1843.7, 300 sec: 1843.7). Total num frames: 36864. Throughput: 0: 466.0. Samples: 9318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:44:37,439][00316] Avg episode reward: [(0, '3.852')] |
|
[2025-03-08 08:44:37,974][03210] Updated weights for policy 0, policy_version 10 (0.0098) |
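
"Updated weights for policy 0, policy_version 10" is the inference worker pulling the learner's latest parameters (the parenthesized number is presumably the time the update took, in seconds). The version counter is what the "Policy #0 lag" statistic is measured against: lag is the learner's current version minus the version that generated a sample. A toy sketch of version-stamped weight sharing (illustrative; the real system moves tensors through shared memory):

```python
import copy

class PolicyStore:
    """Toy version-stamped weight store shared by learner and inference."""
    def __init__(self):
        self.version = 0
        self.state_dict = {}

    def publish(self, state_dict):  # learner side, after each SGD update
        self.version += 1
        self.state_dict = copy.deepcopy(state_dict)

    def fetch(self):                # inference side, before acting
        return self.version, self.state_dict

store = PolicyStore()
store.publish({"w": 1})                  # learner publishes version 1
sample_version, _ = store.fetch()        # a rollout is sampled with version 1
store.publish({"w": 2})                  # learner moves on to version 2
print(store.version - sample_version)    # -> 1, the logged "policy lag"
```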
|
[2025-03-08 08:44:42,435][00316] Fps is (10 sec: 4096.0, 60 sec: 2294.2, 300 sec: 2294.2). Total num frames: 57344. Throughput: 0: 483.1. Samples: 12076. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:44:42,439][00316] Avg episode reward: [(0, '4.435')] |
|
[2025-03-08 08:44:47,435][00316] Fps is (10 sec: 3686.4, 60 sec: 2458.0, 300 sec: 2458.0). Total num frames: 73728. Throughput: 0: 598.4. Samples: 17948. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:44:47,437][00316] Avg episode reward: [(0, '4.502')] |
|
[2025-03-08 08:44:49,329][03210] Updated weights for policy 0, policy_version 20 (0.0020) |
|
[2025-03-08 08:44:52,435][00316] Fps is (10 sec: 3686.4, 60 sec: 2692.0, 300 sec: 2692.0). Total num frames: 94208. Throughput: 0: 677.1. Samples: 23694. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:44:52,436][00316] Avg episode reward: [(0, '4.488')] |
|
[2025-03-08 08:44:57,435][00316] Fps is (10 sec: 4096.0, 60 sec: 2867.6, 300 sec: 2867.6). Total num frames: 114688. Throughput: 0: 667.7. Samples: 26704. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:44:57,439][00316] Avg episode reward: [(0, '4.703')] |
|
[2025-03-08 08:44:57,444][03197] Saving new best policy, reward=4.703! |
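
"Saving new best policy" fires whenever the average episode reward (reported above as a list of per-policy (policy_id, reward) pairs) exceeds the best value seen so far. A minimal sketch of that bookkeeping:

```python
best_reward = float("-inf")

def maybe_save_best(avg_reward, save_fn):
    """Save whenever the average episode reward sets a new record."""
    global best_reward
    if avg_reward > best_reward:
        best_reward = avg_reward
        print(f"Saving new best policy, reward={avg_reward:.3f}!")
        save_fn()

maybe_save_best(4.703, save_fn=lambda: None)  # -> saves (first record)
maybe_save_best(4.611, save_fn=lambda: None)  # -> no save, below 4.703
```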
|
[2025-03-08 08:44:59,402][03210] Updated weights for policy 0, policy_version 30 (0.0015) |
|
[2025-03-08 08:45:02,436][00316] Fps is (10 sec: 3276.7, 60 sec: 2822.0, 300 sec: 2822.0). Total num frames: 126976. Throughput: 0: 713.1. Samples: 32084. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:45:02,443][00316] Avg episode reward: [(0, '4.611')] |
|
[2025-03-08 08:45:07,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3031.3, 300 sec: 3031.3). Total num frames: 151552. Throughput: 0: 846.3. Samples: 38096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:45:07,439][00316] Avg episode reward: [(0, '4.449')] |
|
[2025-03-08 08:45:09,726][03210] Updated weights for policy 0, policy_version 40 (0.0031) |
|
[2025-03-08 08:45:12,435][00316] Fps is (10 sec: 4505.7, 60 sec: 3128.1, 300 sec: 3128.1). Total num frames: 172032. Throughput: 0: 898.6. Samples: 41446. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:45:12,439][00316] Avg episode reward: [(0, '4.398')] |
|
[2025-03-08 08:45:17,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3140.5, 300 sec: 3140.5). Total num frames: 188416. Throughput: 0: 951.4. Samples: 46684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:45:17,439][00316] Avg episode reward: [(0, '4.313')] |
|
[2025-03-08 08:45:21,083][03210] Updated weights for policy 0, policy_version 50 (0.0014) |
|
[2025-03-08 08:45:22,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3151.0). Total num frames: 204800. Throughput: 0: 950.4. Samples: 52086. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:45:22,437][00316] Avg episode reward: [(0, '4.278')] |
|
[2025-03-08 08:45:27,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3160.0). Total num frames: 221184. Throughput: 0: 937.0. Samples: 54242. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:45:27,439][00316] Avg episode reward: [(0, '4.324')] |
|
[2025-03-08 08:45:32,435][00316] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3113.2). Total num frames: 233472. Throughput: 0: 901.2. Samples: 58500. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:45:32,436][00316] Avg episode reward: [(0, '4.341')] |
|
[2025-03-08 08:45:35,009][03210] Updated weights for policy 0, policy_version 60 (0.0026) |
|
[2025-03-08 08:45:37,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3174.6). Total num frames: 253952. Throughput: 0: 895.5. Samples: 63992. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:45:37,437][00316] Avg episode reward: [(0, '4.573')] |
|
[2025-03-08 08:45:42,441][00316] Fps is (10 sec: 4093.7, 60 sec: 3617.8, 300 sec: 3228.6). Total num frames: 274432. Throughput: 0: 901.2. Samples: 67264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:45:42,442][00316] Avg episode reward: [(0, '4.372')] |
|
[2025-03-08 08:45:44,856][03210] Updated weights for policy 0, policy_version 70 (0.0020) |
|
[2025-03-08 08:45:47,436][00316] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3231.5). Total num frames: 290816. Throughput: 0: 903.6. Samples: 72748. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:45:47,437][00316] Avg episode reward: [(0, '4.297')] |
|
[2025-03-08 08:45:47,444][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000071_290816.pth... |
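
The checkpoint filename encodes two counters: checkpoint_000000071_290816.pth is policy version 71 at 290,816 env frames, and 290,816 / 71 = 4,096 suggests this run collects roughly 4,096 frames per policy update (the later checkpoints in this log fit the same ratio). A sketch of composing and parsing that name (format inferred from the log):

```python
import re

def checkpoint_name(policy_version, env_frames):
    # Format inferred from the log: version zero-padded to 9 digits, then frames.
    return f"checkpoint_{policy_version:09d}_{env_frames}.pth"

def parse_checkpoint_name(name):
    m = re.match(r"checkpoint_(\d+)_(\d+)\.pth", name)
    return int(m.group(1)), int(m.group(2))

assert checkpoint_name(71, 290816) == "checkpoint_000000071_290816.pth"
version, frames = parse_checkpoint_name("checkpoint_000000071_290816.pth")
print(frames // version)  # -> 4096 env frames per policy update in this run
```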
|
[2025-03-08 08:45:52,435][00316] Fps is (10 sec: 4098.4, 60 sec: 3686.4, 300 sec: 3320.1). Total num frames: 315392. Throughput: 0: 897.8. Samples: 78498. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:45:52,437][00316] Avg episode reward: [(0, '4.301')] |
|
[2025-03-08 08:45:55,262][03210] Updated weights for policy 0, policy_version 80 (0.0025) |
|
[2025-03-08 08:45:57,435][00316] Fps is (10 sec: 4505.8, 60 sec: 3686.4, 300 sec: 3358.9). Total num frames: 335872. Throughput: 0: 899.6. Samples: 81930. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-08 08:45:57,441][00316] Avg episode reward: [(0, '4.517')] |
|
[2025-03-08 08:46:02,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3355.0). Total num frames: 352256. Throughput: 0: 901.2. Samples: 87240. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:46:02,437][00316] Avg episode reward: [(0, '4.486')] |
|
[2025-03-08 08:46:06,090][03210] Updated weights for policy 0, policy_version 90 (0.0015) |
|
[2025-03-08 08:46:07,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3388.7). Total num frames: 372736. Throughput: 0: 919.4. Samples: 93458. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:46:07,439][00316] Avg episode reward: [(0, '4.588')] |
|
[2025-03-08 08:46:12,438][00316] Fps is (10 sec: 4095.0, 60 sec: 3686.2, 300 sec: 3419.3). Total num frames: 393216. Throughput: 0: 946.9. Samples: 96856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:46:12,441][00316] Avg episode reward: [(0, '4.444')] |
|
[2025-03-08 08:46:16,865][03210] Updated weights for policy 0, policy_version 100 (0.0014) |
|
[2025-03-08 08:46:17,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3413.5). Total num frames: 409600. Throughput: 0: 960.1. Samples: 101704. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:46:17,439][00316] Avg episode reward: [(0, '4.406')] |
|
[2025-03-08 08:46:22,437][00316] Fps is (10 sec: 3686.8, 60 sec: 3754.6, 300 sec: 3440.7). Total num frames: 430080. Throughput: 0: 977.3. Samples: 107972. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:46:22,438][00316] Avg episode reward: [(0, '4.546')] |
|
[2025-03-08 08:46:26,301][03210] Updated weights for policy 0, policy_version 110 (0.0013) |
|
[2025-03-08 08:46:27,436][00316] Fps is (10 sec: 4095.6, 60 sec: 3822.9, 300 sec: 3465.9). Total num frames: 450560. Throughput: 0: 977.5. Samples: 111246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:46:27,440][00316] Avg episode reward: [(0, '4.394')] |
|
[2025-03-08 08:46:32,435][00316] Fps is (10 sec: 3687.0, 60 sec: 3891.2, 300 sec: 3459.0). Total num frames: 466944. Throughput: 0: 963.9. Samples: 116124. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:46:32,440][00316] Avg episode reward: [(0, '4.486')] |
|
[2025-03-08 08:46:37,260][03210] Updated weights for policy 0, policy_version 120 (0.0022) |
|
[2025-03-08 08:46:37,435][00316] Fps is (10 sec: 4096.4, 60 sec: 3959.5, 300 sec: 3511.0). Total num frames: 491520. Throughput: 0: 986.8. Samples: 122904. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:46:37,439][00316] Avg episode reward: [(0, '4.719')] |
|
[2025-03-08 08:46:37,446][03197] Saving new best policy, reward=4.719! |
|
[2025-03-08 08:46:42,439][00316] Fps is (10 sec: 4094.5, 60 sec: 3891.3, 300 sec: 3502.8). Total num frames: 507904. Throughput: 0: 984.1. Samples: 126218. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:46:42,440][00316] Avg episode reward: [(0, '4.575')] |
|
[2025-03-08 08:46:47,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3522.7). Total num frames: 528384. Throughput: 0: 968.0. Samples: 130802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:46:47,437][00316] Avg episode reward: [(0, '4.444')] |
|
[2025-03-08 08:46:48,223][03210] Updated weights for policy 0, policy_version 130 (0.0030) |
|
[2025-03-08 08:46:52,435][00316] Fps is (10 sec: 4097.5, 60 sec: 3891.2, 300 sec: 3541.2). Total num frames: 548864. Throughput: 0: 982.0. Samples: 137646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:46:52,440][00316] Avg episode reward: [(0, '4.439')] |
|
[2025-03-08 08:46:57,438][00316] Fps is (10 sec: 4094.8, 60 sec: 3891.0, 300 sec: 3558.4). Total num frames: 569344. Throughput: 0: 979.4. Samples: 140930. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:46:57,440][00316] Avg episode reward: [(0, '4.441')] |
|
[2025-03-08 08:46:58,718][03210] Updated weights for policy 0, policy_version 140 (0.0017) |
|
[2025-03-08 08:47:02,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3550.0). Total num frames: 585728. Throughput: 0: 979.7. Samples: 145790. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:47:02,440][00316] Avg episode reward: [(0, '4.448')] |
|
[2025-03-08 08:47:07,436][00316] Fps is (10 sec: 4097.1, 60 sec: 3959.5, 300 sec: 3590.1). Total num frames: 610304. Throughput: 0: 988.4. Samples: 152448. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:47:07,440][00316] Avg episode reward: [(0, '4.409')] |
|
[2025-03-08 08:47:08,244][03210] Updated weights for policy 0, policy_version 150 (0.0016) |
|
[2025-03-08 08:47:12,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3581.2). Total num frames: 626688. Throughput: 0: 988.8. Samples: 155742. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:47:12,437][00316] Avg episode reward: [(0, '4.380')] |
|
[2025-03-08 08:47:17,435][00316] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3595.5). Total num frames: 647168. Throughput: 0: 986.8. Samples: 160530. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:47:17,441][00316] Avg episode reward: [(0, '4.474')] |
|
[2025-03-08 08:47:19,056][03210] Updated weights for policy 0, policy_version 160 (0.0026) |
|
[2025-03-08 08:47:22,435][00316] Fps is (10 sec: 4095.9, 60 sec: 3959.6, 300 sec: 3609.0). Total num frames: 667648. Throughput: 0: 985.7. Samples: 167260. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:47:22,441][00316] Avg episode reward: [(0, '4.630')] |
|
[2025-03-08 08:47:27,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3600.3). Total num frames: 684032. Throughput: 0: 980.5. Samples: 170338. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:47:27,437][00316] Avg episode reward: [(0, '4.464')] |
|
[2025-03-08 08:47:30,123][03210] Updated weights for policy 0, policy_version 170 (0.0013) |
|
[2025-03-08 08:47:32,435][00316] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3613.0). Total num frames: 704512. Throughput: 0: 991.6. Samples: 175426. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:47:32,437][00316] Avg episode reward: [(0, '4.386')] |
|
[2025-03-08 08:47:37,435][00316] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3645.5). Total num frames: 729088. Throughput: 0: 989.8. Samples: 182188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:47:37,439][00316] Avg episode reward: [(0, '4.319')] |
|
[2025-03-08 08:47:39,455][03210] Updated weights for policy 0, policy_version 180 (0.0024) |
|
[2025-03-08 08:47:42,436][00316] Fps is (10 sec: 4095.7, 60 sec: 3959.6, 300 sec: 3636.5). Total num frames: 745472. Throughput: 0: 980.1. Samples: 185034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:47:42,442][00316] Avg episode reward: [(0, '4.426')] |
|
[2025-03-08 08:47:47,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3647.5). Total num frames: 765952. Throughput: 0: 987.8. Samples: 190240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:47:47,437][00316] Avg episode reward: [(0, '4.698')] |
|
[2025-03-08 08:47:47,443][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000187_765952.pth... |
|
[2025-03-08 08:47:50,027][03210] Updated weights for policy 0, policy_version 190 (0.0019) |
|
[2025-03-08 08:47:52,435][00316] Fps is (10 sec: 4096.3, 60 sec: 3959.5, 300 sec: 3657.9). Total num frames: 786432. Throughput: 0: 988.8. Samples: 196944. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:47:52,439][00316] Avg episode reward: [(0, '4.948')] |
|
[2025-03-08 08:47:52,443][03197] Saving new best policy, reward=4.948! |
|
[2025-03-08 08:47:57,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.4, 300 sec: 3649.2). Total num frames: 802816. Throughput: 0: 973.3. Samples: 199540. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:47:57,438][00316] Avg episode reward: [(0, '4.579')] |
|
[2025-03-08 08:48:01,000][03210] Updated weights for policy 0, policy_version 200 (0.0027) |
|
[2025-03-08 08:48:02,438][00316] Fps is (10 sec: 3685.4, 60 sec: 3959.3, 300 sec: 3659.1). Total num frames: 823296. Throughput: 0: 988.6. Samples: 205018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-08 08:48:02,442][00316] Avg episode reward: [(0, '4.314')] |
|
[2025-03-08 08:48:07,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3650.9). Total num frames: 839680. Throughput: 0: 943.1. Samples: 209700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:48:07,442][00316] Avg episode reward: [(0, '4.230')] |
|
[2025-03-08 08:48:12,435][00316] Fps is (10 sec: 2868.0, 60 sec: 3754.7, 300 sec: 3625.5). Total num frames: 851968. Throughput: 0: 929.7. Samples: 212176. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:48:12,437][00316] Avg episode reward: [(0, '4.188')] |
|
[2025-03-08 08:48:13,831][03210] Updated weights for policy 0, policy_version 210 (0.0017) |
|
[2025-03-08 08:48:17,435][00316] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3635.3). Total num frames: 872448. Throughput: 0: 939.3. Samples: 217696. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:48:17,440][00316] Avg episode reward: [(0, '4.463')] |
|
[2025-03-08 08:48:22,435][00316] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3661.4). Total num frames: 897024. Throughput: 0: 939.7. Samples: 224474. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-03-08 08:48:22,439][00316] Avg episode reward: [(0, '4.696')] |
|
[2025-03-08 08:48:22,919][03210] Updated weights for policy 0, policy_version 220 (0.0017) |
|
[2025-03-08 08:48:27,435][00316] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3653.7). Total num frames: 913408. Throughput: 0: 927.5. Samples: 226772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:48:27,436][00316] Avg episode reward: [(0, '4.619')] |
|
[2025-03-08 08:48:32,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3662.4). Total num frames: 933888. Throughput: 0: 943.8. Samples: 232710. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:48:32,440][00316] Avg episode reward: [(0, '4.683')] |
|
[2025-03-08 08:48:33,514][03210] Updated weights for policy 0, policy_version 230 (0.0023) |
|
[2025-03-08 08:48:37,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3670.7). Total num frames: 954368. Throughput: 0: 944.4. Samples: 239440. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-03-08 08:48:37,439][00316] Avg episode reward: [(0, '4.802')] |
|
[2025-03-08 08:48:42,436][00316] Fps is (10 sec: 3686.3, 60 sec: 3754.7, 300 sec: 3663.3). Total num frames: 970752. Throughput: 0: 932.0. Samples: 241480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:48:42,437][00316] Avg episode reward: [(0, '4.888')] |
|
[2025-03-08 08:48:44,608][03210] Updated weights for policy 0, policy_version 240 (0.0012) |
|
[2025-03-08 08:48:47,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3686.5). Total num frames: 995328. Throughput: 0: 944.2. Samples: 247504. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:48:47,437][00316] Avg episode reward: [(0, '4.924')] |
|
[2025-03-08 08:48:52,435][00316] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3679.0). Total num frames: 1011712. Throughput: 0: 981.2. Samples: 253854. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-03-08 08:48:52,441][00316] Avg episode reward: [(0, '4.685')] |
|
[2025-03-08 08:48:55,551][03210] Updated weights for policy 0, policy_version 250 (0.0028) |
|
[2025-03-08 08:48:57,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3686.5). Total num frames: 1032192. Throughput: 0: 971.6. Samples: 255896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:48:57,440][00316] Avg episode reward: [(0, '4.602')] |
|
[2025-03-08 08:49:02,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3823.1, 300 sec: 3693.6). Total num frames: 1052672. Throughput: 0: 993.5. Samples: 262404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:49:02,441][00316] Avg episode reward: [(0, '4.891')] |
|
[2025-03-08 08:49:04,741][03210] Updated weights for policy 0, policy_version 260 (0.0016) |
|
[2025-03-08 08:49:07,436][00316] Fps is (10 sec: 4095.7, 60 sec: 3891.2, 300 sec: 3700.6). Total num frames: 1073152. Throughput: 0: 975.2. Samples: 268358. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:49:07,437][00316] Avg episode reward: [(0, '4.862')] |
|
[2025-03-08 08:49:12,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3693.4). Total num frames: 1089536. Throughput: 0: 969.3. Samples: 270390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:49:12,439][00316] Avg episode reward: [(0, '4.655')] |
|
[2025-03-08 08:49:15,667][03210] Updated weights for policy 0, policy_version 270 (0.0016) |
|
[2025-03-08 08:49:17,435][00316] Fps is (10 sec: 3686.6, 60 sec: 3959.5, 300 sec: 3762.8). Total num frames: 1110016. Throughput: 0: 981.9. Samples: 276894. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:49:17,438][00316] Avg episode reward: [(0, '4.422')] |
|
[2025-03-08 08:49:22,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1130496. Throughput: 0: 964.5. Samples: 282842. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-03-08 08:49:22,437][00316] Avg episode reward: [(0, '4.470')] |
|
[2025-03-08 08:49:26,732][03210] Updated weights for policy 0, policy_version 280 (0.0021) |
|
[2025-03-08 08:49:27,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1146880. Throughput: 0: 968.1. Samples: 285042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:49:27,443][00316] Avg episode reward: [(0, '4.722')] |
|
[2025-03-08 08:49:32,435][00316] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 1171456. Throughput: 0: 984.0. Samples: 291782. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:49:32,437][00316] Avg episode reward: [(0, '4.785')] |
|
[2025-03-08 08:49:36,631][03210] Updated weights for policy 0, policy_version 290 (0.0025) |
|
[2025-03-08 08:49:37,436][00316] Fps is (10 sec: 4095.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1187840. Throughput: 0: 970.1. Samples: 297508. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:49:37,437][00316] Avg episode reward: [(0, '4.585')] |
|
[2025-03-08 08:49:42,437][00316] Fps is (10 sec: 3685.7, 60 sec: 3959.4, 300 sec: 3846.1). Total num frames: 1208320. Throughput: 0: 978.9. Samples: 299948. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:49:42,442][00316] Avg episode reward: [(0, '4.613')] |
|
[2025-03-08 08:49:46,739][03210] Updated weights for policy 0, policy_version 300 (0.0013) |
|
[2025-03-08 08:49:47,435][00316] Fps is (10 sec: 4096.2, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1228800. Throughput: 0: 981.1. Samples: 306554. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:49:47,439][00316] Avg episode reward: [(0, '4.482')] |
|
[2025-03-08 08:49:47,451][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000300_1228800.pth... |
|
[2025-03-08 08:49:47,576][03197] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000071_290816.pth |
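
Each periodic save is followed by removal of the oldest periodic checkpoint: after checkpoint 300 is written, checkpoint 71 is deleted, leaving the two most recent (the best-policy file is managed separately). A minimal sketch of that keep-the-last-N rotation (illustrative helper, not Sample Factory's API):

```python
import glob
import os

def rotate_checkpoints(checkpoint_dir, keep_latest=2):
    """Delete periodic checkpoints beyond the newest `keep_latest` ones."""
    # Zero-padded version numbers make lexicographic order == version order.
    paths = sorted(glob.glob(os.path.join(checkpoint_dir, "checkpoint_*.pth")))
    for stale in paths[:-keep_latest]:
        print(f"Removing {stale}")
        os.remove(stale)

# rotate_checkpoints("/content/train_dir/default_experiment/checkpoint_p0")
```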
|
[2025-03-08 08:49:52,435][00316] Fps is (10 sec: 3687.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1245184. Throughput: 0: 968.7. Samples: 311948. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:49:52,439][00316] Avg episode reward: [(0, '4.401')] |
|
[2025-03-08 08:49:57,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1265664. Throughput: 0: 984.3. Samples: 314682. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:49:57,440][00316] Avg episode reward: [(0, '4.604')] |
|
[2025-03-08 08:49:57,644][03210] Updated weights for policy 0, policy_version 310 (0.0021) |
|
[2025-03-08 08:50:02,435][00316] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 1290240. Throughput: 0: 990.9. Samples: 321484. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:50:02,437][00316] Avg episode reward: [(0, '4.692')] |
|
[2025-03-08 08:50:07,436][00316] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1306624. Throughput: 0: 972.4. Samples: 326600. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:50:07,439][00316] Avg episode reward: [(0, '4.544')] |
|
[2025-03-08 08:50:08,407][03210] Updated weights for policy 0, policy_version 320 (0.0026) |
|
[2025-03-08 08:50:12,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 1327104. Throughput: 0: 989.5. Samples: 329570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:50:12,437][00316] Avg episode reward: [(0, '4.556')] |
|
[2025-03-08 08:50:17,436][00316] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 1347584. Throughput: 0: 986.7. Samples: 336184. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:50:17,440][00316] Avg episode reward: [(0, '4.583')] |
|
[2025-03-08 08:50:17,655][03210] Updated weights for policy 0, policy_version 330 (0.0025) |
|
[2025-03-08 08:50:22,440][00316] Fps is (10 sec: 3684.7, 60 sec: 3890.9, 300 sec: 3873.8). Total num frames: 1363968. Throughput: 0: 972.8. Samples: 341288. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:50:22,442][00316] Avg episode reward: [(0, '4.604')] |
|
[2025-03-08 08:50:27,435][00316] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1384448. Throughput: 0: 988.7. Samples: 344436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:50:27,437][00316] Avg episode reward: [(0, '4.477')] |
|
[2025-03-08 08:50:28,529][03210] Updated weights for policy 0, policy_version 340 (0.0029) |
|
[2025-03-08 08:50:32,438][00316] Fps is (10 sec: 4506.4, 60 sec: 3959.3, 300 sec: 3915.5). Total num frames: 1409024. Throughput: 0: 992.9. Samples: 351238. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:50:32,440][00316] Avg episode reward: [(0, '4.489')] |
|
[2025-03-08 08:50:37,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3873.9). Total num frames: 1417216. Throughput: 0: 964.5. Samples: 355350. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:50:37,436][00316] Avg episode reward: [(0, '4.500')] |
|
[2025-03-08 08:50:41,008][03210] Updated weights for policy 0, policy_version 350 (0.0013) |
|
[2025-03-08 08:50:42,435][00316] Fps is (10 sec: 2868.0, 60 sec: 3823.0, 300 sec: 3887.7). Total num frames: 1437696. Throughput: 0: 951.1. Samples: 357480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:50:42,440][00316] Avg episode reward: [(0, '4.582')] |
|
[2025-03-08 08:50:47,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 1458176. Throughput: 0: 947.3. Samples: 364114. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:50:47,440][00316] Avg episode reward: [(0, '4.634')] |
|
[2025-03-08 08:50:50,877][03210] Updated weights for policy 0, policy_version 360 (0.0019) |
|
[2025-03-08 08:50:52,435][00316] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1478656. Throughput: 0: 957.8. Samples: 369700. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:50:52,439][00316] Avg episode reward: [(0, '4.900')] |
|
[2025-03-08 08:50:57,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1499136. Throughput: 0: 950.0. Samples: 372322. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:50:57,439][00316] Avg episode reward: [(0, '5.024')] |
|
[2025-03-08 08:50:57,444][03197] Saving new best policy, reward=5.024! |
|
[2025-03-08 08:51:01,061][03210] Updated weights for policy 0, policy_version 370 (0.0013) |
|
[2025-03-08 08:51:02,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1519616. Throughput: 0: 953.2. Samples: 379078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:51:02,440][00316] Avg episode reward: [(0, '5.124')] |
|
[2025-03-08 08:51:02,443][03197] Saving new best policy, reward=5.124! |
|
[2025-03-08 08:51:07,438][00316] Fps is (10 sec: 3685.5, 60 sec: 3822.8, 300 sec: 3873.8). Total num frames: 1536000. Throughput: 0: 953.4. Samples: 384188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:51:07,440][00316] Avg episode reward: [(0, '4.946')] |
|
[2025-03-08 08:51:11,870][03210] Updated weights for policy 0, policy_version 380 (0.0012) |
|
[2025-03-08 08:51:12,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1556480. Throughput: 0: 950.8. Samples: 387220. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:51:12,441][00316] Avg episode reward: [(0, '4.952')] |
|
[2025-03-08 08:51:17,435][00316] Fps is (10 sec: 4506.7, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1581056. Throughput: 0: 946.2. Samples: 393812. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:51:17,437][00316] Avg episode reward: [(0, '4.964')] |
|
[2025-03-08 08:51:22,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3873.9). Total num frames: 1593344. Throughput: 0: 963.5. Samples: 398708. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:51:22,438][00316] Avg episode reward: [(0, '5.084')] |
|
[2025-03-08 08:51:22,844][03210] Updated weights for policy 0, policy_version 390 (0.0015) |
|
[2025-03-08 08:51:27,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1613824. Throughput: 0: 986.9. Samples: 401890. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:51:27,437][00316] Avg episode reward: [(0, '5.489')] |
|
[2025-03-08 08:51:27,519][03197] Saving new best policy, reward=5.489! |
|
[2025-03-08 08:51:32,111][03210] Updated weights for policy 0, policy_version 400 (0.0021) |
|
[2025-03-08 08:51:32,435][00316] Fps is (10 sec: 4505.6, 60 sec: 3823.1, 300 sec: 3887.7). Total num frames: 1638400. Throughput: 0: 990.9. Samples: 408706. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:51:32,437][00316] Avg episode reward: [(0, '5.247')] |
|
[2025-03-08 08:51:37,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.8). Total num frames: 1654784. Throughput: 0: 972.3. Samples: 413452. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:51:37,437][00316] Avg episode reward: [(0, '5.116')] |
|
[2025-03-08 08:51:42,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1675264. Throughput: 0: 988.6. Samples: 416808. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:51:42,437][00316] Avg episode reward: [(0, '5.172')] |
|
[2025-03-08 08:51:42,898][03210] Updated weights for policy 0, policy_version 410 (0.0015) |
|
[2025-03-08 08:51:47,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1695744. Throughput: 0: 986.9. Samples: 423488. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2025-03-08 08:51:47,439][00316] Avg episode reward: [(0, '5.386')] |
|
[2025-03-08 08:51:47,450][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000414_1695744.pth... |
|
[2025-03-08 08:51:47,616][03197] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000187_765952.pth |
|
[2025-03-08 08:51:52,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 1712128. Throughput: 0: 979.6. Samples: 428266. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:51:52,441][00316] Avg episode reward: [(0, '5.331')] |
|
[2025-03-08 08:51:53,846][03210] Updated weights for policy 0, policy_version 420 (0.0019) |
|
[2025-03-08 08:51:57,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1732608. Throughput: 0: 986.0. Samples: 431588. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:51:57,439][00316] Avg episode reward: [(0, '5.414')] |
|
[2025-03-08 08:52:02,443][00316] Fps is (10 sec: 4092.9, 60 sec: 3890.7, 300 sec: 3873.7). Total num frames: 1753088. Throughput: 0: 990.5. Samples: 438390. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:52:02,444][00316] Avg episode reward: [(0, '5.391')] |
|
[2025-03-08 08:52:04,261][03210] Updated weights for policy 0, policy_version 430 (0.0014) |
|
[2025-03-08 08:52:07,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.4, 300 sec: 3873.8). Total num frames: 1769472. Throughput: 0: 987.0. Samples: 443122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:52:07,437][00316] Avg episode reward: [(0, '5.576')] |
|
[2025-03-08 08:52:07,444][03197] Saving new best policy, reward=5.576! |
|
[2025-03-08 08:52:12,435][00316] Fps is (10 sec: 4099.1, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1794048. Throughput: 0: 991.2. Samples: 446496. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:52:12,437][00316] Avg episode reward: [(0, '5.459')] |
|
[2025-03-08 08:52:14,042][03210] Updated weights for policy 0, policy_version 440 (0.0024) |
|
[2025-03-08 08:52:17,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 1810432. Throughput: 0: 981.2. Samples: 452858. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:52:17,438][00316] Avg episode reward: [(0, '5.706')] |
|
[2025-03-08 08:52:17,464][03197] Saving new best policy, reward=5.706! |
|
[2025-03-08 08:52:22,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1830912. Throughput: 0: 987.7. Samples: 457900. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:52:22,437][00316] Avg episode reward: [(0, '6.085')] |
|
[2025-03-08 08:52:22,445][03197] Saving new best policy, reward=6.085! |
|
[2025-03-08 08:52:24,752][03210] Updated weights for policy 0, policy_version 450 (0.0018) |
|
[2025-03-08 08:52:27,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1851392. Throughput: 0: 986.8. Samples: 461214. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:52:27,437][00316] Avg episode reward: [(0, '6.046')] |
|
[2025-03-08 08:52:32,436][00316] Fps is (10 sec: 4095.6, 60 sec: 3891.1, 300 sec: 3873.8). Total num frames: 1871872. Throughput: 0: 977.5. Samples: 467476. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:52:32,441][00316] Avg episode reward: [(0, '6.008')] |
|
[2025-03-08 08:52:35,546][03210] Updated weights for policy 0, policy_version 460 (0.0016) |
|
[2025-03-08 08:52:37,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1892352. Throughput: 0: 990.5. Samples: 472840. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:52:37,439][00316] Avg episode reward: [(0, '6.324')] |
|
[2025-03-08 08:52:37,446][03197] Saving new best policy, reward=6.324! |
|
[2025-03-08 08:52:42,435][00316] Fps is (10 sec: 4096.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1912832. Throughput: 0: 991.3. Samples: 476196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:52:42,437][00316] Avg episode reward: [(0, '6.639')] |
|
[2025-03-08 08:52:42,441][03197] Saving new best policy, reward=6.639! |
|
[2025-03-08 08:52:45,209][03210] Updated weights for policy 0, policy_version 470 (0.0027) |
|
[2025-03-08 08:52:47,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1929216. Throughput: 0: 969.3. Samples: 482000. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:52:47,438][00316] Avg episode reward: [(0, '6.821')] |
|
[2025-03-08 08:52:47,446][03197] Saving new best policy, reward=6.821! |
|
[2025-03-08 08:52:52,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1949696. Throughput: 0: 986.0. Samples: 487490. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:52:52,438][00316] Avg episode reward: [(0, '7.246')] |
|
[2025-03-08 08:52:52,440][03197] Saving new best policy, reward=7.246! |
|
[2025-03-08 08:52:55,838][03210] Updated weights for policy 0, policy_version 480 (0.0021) |
|
[2025-03-08 08:52:57,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.8). Total num frames: 1970176. Throughput: 0: 985.4. Samples: 490840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:52:57,437][00316] Avg episode reward: [(0, '7.136')] |
|
[2025-03-08 08:53:02,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.7, 300 sec: 3887.7). Total num frames: 1986560. Throughput: 0: 968.2. Samples: 496426. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:53:02,436][00316] Avg episode reward: [(0, '6.366')] |
|
[2025-03-08 08:53:06,848][03210] Updated weights for policy 0, policy_version 490 (0.0020) |
|
[2025-03-08 08:53:07,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2007040. Throughput: 0: 986.9. Samples: 502310. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:53:07,437][00316] Avg episode reward: [(0, '6.671')] |
|
[2025-03-08 08:53:12,437][00316] Fps is (10 sec: 3685.9, 60 sec: 3822.8, 300 sec: 3901.6). Total num frames: 2023424. Throughput: 0: 978.9. Samples: 505268. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:53:12,441][00316] Avg episode reward: [(0, '6.716')] |
|
[2025-03-08 08:53:17,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 2039808. Throughput: 0: 925.0. Samples: 509100. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:53:17,437][00316] Avg episode reward: [(0, '6.773')] |
|
[2025-03-08 08:53:19,304][03210] Updated weights for policy 0, policy_version 500 (0.0018) |
|
[2025-03-08 08:53:22,435][00316] Fps is (10 sec: 3686.9, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 2060288. Throughput: 0: 943.3. Samples: 515288. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:53:22,437][00316] Avg episode reward: [(0, '6.387')] |
|
[2025-03-08 08:53:27,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 2080768. Throughput: 0: 942.6. Samples: 518612. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:53:27,436][00316] Avg episode reward: [(0, '6.773')] |
|
[2025-03-08 08:53:29,215][03210] Updated weights for policy 0, policy_version 510 (0.0013) |
|
[2025-03-08 08:53:32,435][00316] Fps is (10 sec: 3686.3, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 2097152. Throughput: 0: 928.0. Samples: 523762. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:53:32,437][00316] Avg episode reward: [(0, '6.994')] |
|
[2025-03-08 08:53:37,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 2121728. Throughput: 0: 950.4. Samples: 530256. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-03-08 08:53:37,437][00316] Avg episode reward: [(0, '7.047')] |
|
[2025-03-08 08:53:39,088][03210] Updated weights for policy 0, policy_version 520 (0.0017) |
|
[2025-03-08 08:53:42,440][00316] Fps is (10 sec: 4094.0, 60 sec: 3754.4, 300 sec: 3873.8). Total num frames: 2138112. Throughput: 0: 951.4. Samples: 533658. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:53:42,442][00316] Avg episode reward: [(0, '7.449')] |
|
[2025-03-08 08:53:42,472][03197] Saving new best policy, reward=7.449! |
|
[2025-03-08 08:53:47,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 2158592. Throughput: 0: 930.1. Samples: 538282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:53:47,440][00316] Avg episode reward: [(0, '7.954')] |
|
[2025-03-08 08:53:47,450][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000527_2158592.pth... |
|
[2025-03-08 08:53:47,581][03197] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000300_1228800.pth |
|
[2025-03-08 08:53:47,597][03197] Saving new best policy, reward=7.954! |
|
[2025-03-08 08:53:50,338][03210] Updated weights for policy 0, policy_version 530 (0.0018) |
|
[2025-03-08 08:53:52,435][00316] Fps is (10 sec: 4098.1, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 2179072. Throughput: 0: 947.9. Samples: 544966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-08 08:53:52,439][00316] Avg episode reward: [(0, '7.714')] |
|
[2025-03-08 08:53:57,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 2199552. Throughput: 0: 954.6. Samples: 548224. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:53:57,441][00316] Avg episode reward: [(0, '7.791')] |
|
[2025-03-08 08:54:00,955][03210] Updated weights for policy 0, policy_version 540 (0.0019) |
|
[2025-03-08 08:54:02,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.9). Total num frames: 2215936. Throughput: 0: 979.1. Samples: 553160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:54:02,440][00316] Avg episode reward: [(0, '7.797')] |
|
[2025-03-08 08:54:07,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2240512. Throughput: 0: 991.3. Samples: 559898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:54:07,440][00316] Avg episode reward: [(0, '8.429')] |
|
[2025-03-08 08:54:07,448][03197] Saving new best policy, reward=8.429! |
|
[2025-03-08 08:54:10,513][03210] Updated weights for policy 0, policy_version 550 (0.0019) |
|
[2025-03-08 08:54:12,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.3, 300 sec: 3887.7). Total num frames: 2256896. Throughput: 0: 992.4. Samples: 563268. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-08 08:54:12,439][00316] Avg episode reward: [(0, '8.876')] |
|
[2025-03-08 08:54:12,440][03197] Saving new best policy, reward=8.876! |
|
[2025-03-08 08:54:17,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2273280. Throughput: 0: 979.1. Samples: 567822. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:54:17,437][00316] Avg episode reward: [(0, '8.776')] |
|
[2025-03-08 08:54:21,046][03210] Updated weights for policy 0, policy_version 560 (0.0030) |
|
[2025-03-08 08:54:22,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2297856. Throughput: 0: 984.8. Samples: 574574. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:54:22,437][00316] Avg episode reward: [(0, '8.068')] |
|
[2025-03-08 08:54:27,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2314240. Throughput: 0: 975.0. Samples: 577528. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:54:27,437][00316] Avg episode reward: [(0, '7.874')] |
|
[2025-03-08 08:54:32,208][03210] Updated weights for policy 0, policy_version 570 (0.0019) |
|
[2025-03-08 08:54:32,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2334720. Throughput: 0: 986.5. Samples: 582676. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:54:32,436][00316] Avg episode reward: [(0, '8.410')] |
|
[2025-03-08 08:54:37,436][00316] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 2355200. Throughput: 0: 986.5. Samples: 589358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:54:37,441][00316] Avg episode reward: [(0, '8.874')] |
|
[2025-03-08 08:54:42,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.5, 300 sec: 3873.8). Total num frames: 2371584. Throughput: 0: 975.5. Samples: 592122. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:54:42,437][00316] Avg episode reward: [(0, '10.073')] |
|
[2025-03-08 08:54:42,440][03197] Saving new best policy, reward=10.073! |
|
[2025-03-08 08:54:43,019][03210] Updated weights for policy 0, policy_version 580 (0.0025) |
|
[2025-03-08 08:54:47,436][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2392064. Throughput: 0: 984.2. Samples: 597450. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:54:47,437][00316] Avg episode reward: [(0, '10.748')] |
|
[2025-03-08 08:54:47,445][03197] Saving new best policy, reward=10.748! |
|
[2025-03-08 08:54:52,285][03210] Updated weights for policy 0, policy_version 590 (0.0016) |
|
[2025-03-08 08:54:52,436][00316] Fps is (10 sec: 4505.3, 60 sec: 3959.4, 300 sec: 3901.6). Total num frames: 2416640. Throughput: 0: 980.3. Samples: 604010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:54:52,437][00316] Avg episode reward: [(0, '11.268')] |
|
[2025-03-08 08:54:52,440][03197] Saving new best policy, reward=11.268! |
|
[2025-03-08 08:54:57,435][00316] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 2428928. Throughput: 0: 956.0. Samples: 606290. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-03-08 08:54:57,437][00316] Avg episode reward: [(0, '12.665')] |
|
[2025-03-08 08:54:57,442][03197] Saving new best policy, reward=12.665! |
|
[2025-03-08 08:55:02,435][00316] Fps is (10 sec: 3686.7, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2453504. Throughput: 0: 982.1. Samples: 612016. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:55:02,437][00316] Avg episode reward: [(0, '12.079')] |
|
[2025-03-08 08:55:03,422][03210] Updated weights for policy 0, policy_version 600 (0.0015) |
|
[2025-03-08 08:55:07,435][00316] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2473984. Throughput: 0: 979.3. Samples: 618644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:55:07,442][00316] Avg episode reward: [(0, '11.462')] |
|
[2025-03-08 08:55:12,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2490368. Throughput: 0: 958.8. Samples: 620674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:55:12,439][00316] Avg episode reward: [(0, '10.237')] |
|
[2025-03-08 08:55:14,260][03210] Updated weights for policy 0, policy_version 610 (0.0015) |
|
[2025-03-08 08:55:17,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.8). Total num frames: 2510848. Throughput: 0: 979.2. Samples: 626742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:55:17,437][00316] Avg episode reward: [(0, '9.612')] |
|
[2025-03-08 08:55:22,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2531328. Throughput: 0: 970.5. Samples: 633030. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:55:22,440][00316] Avg episode reward: [(0, '9.029')] |
|
[2025-03-08 08:55:25,143][03210] Updated weights for policy 0, policy_version 620 (0.0030) |
|
[2025-03-08 08:55:27,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2547712. Throughput: 0: 954.6. Samples: 635078. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:55:27,439][00316] Avg episode reward: [(0, '8.973')] |
|
[2025-03-08 08:55:32,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2568192. Throughput: 0: 981.5. Samples: 641616. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-08 08:55:32,440][00316] Avg episode reward: [(0, '9.676')] |
|
[2025-03-08 08:55:34,294][03210] Updated weights for policy 0, policy_version 630 (0.0022) |
|
[2025-03-08 08:55:37,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2588672. Throughput: 0: 967.8. Samples: 647560. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:55:37,438][00316] Avg episode reward: [(0, '10.431')] |
|
[2025-03-08 08:55:42,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2605056. Throughput: 0: 966.9. Samples: 649802. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:55:42,437][00316] Avg episode reward: [(0, '11.260')] |
|
[2025-03-08 08:55:46,755][03210] Updated weights for policy 0, policy_version 640 (0.0027) |
|
[2025-03-08 08:55:47,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3873.8). Total num frames: 2621440. Throughput: 0: 956.7. Samples: 655068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:55:47,439][00316] Avg episode reward: [(0, '11.342')] |
|
[2025-03-08 08:55:47,446][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000640_2621440.pth... |
|
[2025-03-08 08:55:47,571][03197] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000414_1695744.pth |
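The save/remove pair above shows the checkpoint naming convention, checkpoint_<train step>_<env frames>.pth, with older files deleted so only a few recent checkpoints stay on disk. In this run the two numbers are tied by a fixed 4096 frames per training step (640 * 4096 = 2621440, and 414 * 4096 = 1695744 for the file being removed), an observation about this particular configuration rather than a universal constant. A quick sketch that parses and checks the convention:

import re

def parse_checkpoint_name(name):
    """Split checkpoint_000000640_2621440.pth into (train_step, env_frames)."""
    m = re.match(r"checkpoint_(\d+)_(\d+)\.pth$", name)
    if m is None:
        raise ValueError(f"unexpected checkpoint name: {name!r}")
    return int(m.group(1)), int(m.group(2))

for name in ("checkpoint_000000640_2621440.pth",
             "checkpoint_000000414_1695744.pth"):
    step, frames = parse_checkpoint_name(name)
    assert frames == step * 4096  # holds for every checkpoint in this log
    print(name, "->", step, frames)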
|
[2025-03-08 08:55:52,439][00316] Fps is (10 sec: 3275.6, 60 sec: 3686.2, 300 sec: 3859.9). Total num frames: 2637824. Throughput: 0: 920.5. Samples: 660070. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:55:52,440][00316] Avg episode reward: [(0, '11.120')] |
|
[2025-03-08 08:55:57,436][00316] Fps is (10 sec: 3686.2, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 2658304. Throughput: 0: 935.7. Samples: 662782. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:55:57,437][00316] Avg episode reward: [(0, '12.015')] |
|
[2025-03-08 08:55:57,714][03210] Updated weights for policy 0, policy_version 650 (0.0015) |
|
[2025-03-08 08:56:02,436][00316] Fps is (10 sec: 4507.1, 60 sec: 3822.9, 300 sec: 3887.8). Total num frames: 2682880. Throughput: 0: 949.4. Samples: 669464. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:56:02,437][00316] Avg episode reward: [(0, '12.689')] |
|
[2025-03-08 08:56:02,441][03197] Saving new best policy, reward=12.689! |
|
[2025-03-08 08:56:07,435][00316] Fps is (10 sec: 3686.6, 60 sec: 3686.4, 300 sec: 3860.0). Total num frames: 2695168. Throughput: 0: 921.6. Samples: 674502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:56:07,436][00316] Avg episode reward: [(0, '12.986')] |
|
[2025-03-08 08:56:07,446][03197] Saving new best policy, reward=12.986! |
|
[2025-03-08 08:56:08,695][03210] Updated weights for policy 0, policy_version 660 (0.0020) |
|
[2025-03-08 08:56:12,435][00316] Fps is (10 sec: 3686.6, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 2719744. Throughput: 0: 942.9. Samples: 677508. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:56:12,437][00316] Avg episode reward: [(0, '13.150')] |
|
[2025-03-08 08:56:12,440][03197] Saving new best policy, reward=13.150! |
|
[2025-03-08 08:56:17,435][00316] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 2740224. Throughput: 0: 946.3. Samples: 684200. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:56:17,443][00316] Avg episode reward: [(0, '13.780')] |
|
[2025-03-08 08:56:17,450][03197] Saving new best policy, reward=13.780! |
|
[2025-03-08 08:56:18,534][03210] Updated weights for policy 0, policy_version 670 (0.0027) |
|
[2025-03-08 08:56:22,435][00316] Fps is (10 sec: 3276.7, 60 sec: 3686.4, 300 sec: 3860.0). Total num frames: 2752512. Throughput: 0: 918.4. Samples: 688886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:56:22,445][00316] Avg episode reward: [(0, '13.457')] |
|
[2025-03-08 08:56:27,436][00316] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 2777088. Throughput: 0: 942.5. Samples: 692216. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:56:27,441][00316] Avg episode reward: [(0, '14.667')] |
|
[2025-03-08 08:56:27,450][03197] Saving new best policy, reward=14.667! |
|
[2025-03-08 08:56:28,747][03210] Updated weights for policy 0, policy_version 680 (0.0022) |
|
[2025-03-08 08:56:32,436][00316] Fps is (10 sec: 4505.3, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 2797568. Throughput: 0: 974.9. Samples: 698940. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:56:32,437][00316] Avg episode reward: [(0, '14.244')] |
|
[2025-03-08 08:56:37,435][00316] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 2813952. Throughput: 0: 973.6. Samples: 703878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:56:37,439][00316] Avg episode reward: [(0, '13.772')] |
|
[2025-03-08 08:56:39,624][03210] Updated weights for policy 0, policy_version 690 (0.0017) |
|
[2025-03-08 08:56:42,435][00316] Fps is (10 sec: 4096.3, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2838528. Throughput: 0: 987.6. Samples: 707222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:56:42,438][00316] Avg episode reward: [(0, '14.164')] |
|
[2025-03-08 08:56:47,439][00316] Fps is (10 sec: 4094.6, 60 sec: 3891.0, 300 sec: 3873.8). Total num frames: 2854912. Throughput: 0: 989.6. Samples: 713998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:56:47,440][00316] Avg episode reward: [(0, '12.914')] |
|
[2025-03-08 08:56:50,310][03210] Updated weights for policy 0, policy_version 700 (0.0030) |
|
[2025-03-08 08:56:52,436][00316] Fps is (10 sec: 3686.3, 60 sec: 3959.7, 300 sec: 3873.8). Total num frames: 2875392. Throughput: 0: 984.1. Samples: 718786. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:56:52,440][00316] Avg episode reward: [(0, '13.460')] |
|
[2025-03-08 08:56:57,436][00316] Fps is (10 sec: 4097.2, 60 sec: 3959.5, 300 sec: 3873.9). Total num frames: 2895872. Throughput: 0: 993.6. Samples: 722222. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:56:57,441][00316] Avg episode reward: [(0, '14.207')] |
|
[2025-03-08 08:56:59,518][03210] Updated weights for policy 0, policy_version 710 (0.0014) |
|
[2025-03-08 08:57:02,435][00316] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2916352. Throughput: 0: 987.5. Samples: 728638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:57:02,439][00316] Avg episode reward: [(0, '15.306')] |
|
[2025-03-08 08:57:02,441][03197] Saving new best policy, reward=15.306! |
|
[2025-03-08 08:57:07,435][00316] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2932736. Throughput: 0: 998.1. Samples: 733800. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:57:07,440][00316] Avg episode reward: [(0, '16.565')] |
|
[2025-03-08 08:57:07,505][03197] Saving new best policy, reward=16.565! |
|
[2025-03-08 08:57:10,436][03210] Updated weights for policy 0, policy_version 720 (0.0018) |
|
[2025-03-08 08:57:12,435][00316] Fps is (10 sec: 4095.9, 60 sec: 3959.4, 300 sec: 3887.7). Total num frames: 2957312. Throughput: 0: 998.0. Samples: 737124. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:57:12,441][00316] Avg episode reward: [(0, '17.470')] |
|
[2025-03-08 08:57:12,445][03197] Saving new best policy, reward=17.470! |
|
[2025-03-08 08:57:17,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2973696. Throughput: 0: 978.7. Samples: 742982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:57:17,441][00316] Avg episode reward: [(0, '17.265')] |
|
[2025-03-08 08:57:21,412][03210] Updated weights for policy 0, policy_version 730 (0.0022) |
|
[2025-03-08 08:57:22,435][00316] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3873.8). Total num frames: 2994176. Throughput: 0: 990.4. Samples: 748444. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:57:22,441][00316] Avg episode reward: [(0, '16.649')] |
|
[2025-03-08 08:57:27,436][00316] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3873.9). Total num frames: 3014656. Throughput: 0: 991.7. Samples: 751850. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:57:27,441][00316] Avg episode reward: [(0, '15.633')] |
|
[2025-03-08 08:57:31,730][03210] Updated weights for policy 0, policy_version 740 (0.0026) |
|
[2025-03-08 08:57:32,437][00316] Fps is (10 sec: 3685.9, 60 sec: 3891.1, 300 sec: 3859.9). Total num frames: 3031040. Throughput: 0: 968.2. Samples: 757564. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:57:32,439][00316] Avg episode reward: [(0, '16.067')] |
|
[2025-03-08 08:57:37,435][00316] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 3051520. Throughput: 0: 993.7. Samples: 763502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:57:37,436][00316] Avg episode reward: [(0, '16.463')] |
|
[2025-03-08 08:57:41,040][03210] Updated weights for policy 0, policy_version 750 (0.0027) |
|
[2025-03-08 08:57:42,435][00316] Fps is (10 sec: 4506.2, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3076096. Throughput: 0: 993.8. Samples: 766944. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:57:42,437][00316] Avg episode reward: [(0, '16.542')] |
|
[2025-03-08 08:57:47,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3959.7, 300 sec: 3873.8). Total num frames: 3092480. Throughput: 0: 969.5. Samples: 772264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:57:47,440][00316] Avg episode reward: [(0, '17.616')] |
|
[2025-03-08 08:57:47,448][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000755_3092480.pth... |
|
[2025-03-08 08:57:47,572][03197] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000527_2158592.pth |
|
[2025-03-08 08:57:47,591][03197] Saving new best policy, reward=17.616! |
|
[2025-03-08 08:57:51,811][03210] Updated weights for policy 0, policy_version 760 (0.0014) |
|
[2025-03-08 08:57:52,435][00316] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3112960. Throughput: 0: 992.0. Samples: 778438. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:57:52,439][00316] Avg episode reward: [(0, '17.218')] |
|
[2025-03-08 08:57:57,437][00316] Fps is (10 sec: 4095.4, 60 sec: 3959.4, 300 sec: 3887.7). Total num frames: 3133440. Throughput: 0: 993.5. Samples: 781832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:57:57,438][00316] Avg episode reward: [(0, '17.800')] |
|
[2025-03-08 08:57:57,447][03197] Saving new best policy, reward=17.800! |
|
[2025-03-08 08:58:02,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3149824. Throughput: 0: 969.1. Samples: 786592. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2025-03-08 08:58:02,440][00316] Avg episode reward: [(0, '17.727')] |
|
[2025-03-08 08:58:03,088][03210] Updated weights for policy 0, policy_version 770 (0.0017) |
|
[2025-03-08 08:58:07,435][00316] Fps is (10 sec: 3686.9, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3170304. Throughput: 0: 993.6. Samples: 793158. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:58:07,440][00316] Avg episode reward: [(0, '16.155')] |
|
[2025-03-08 08:58:12,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3190784. Throughput: 0: 992.3. Samples: 796502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:58:12,443][00316] Avg episode reward: [(0, '16.936')] |
|
[2025-03-08 08:58:13,041][03210] Updated weights for policy 0, policy_version 780 (0.0021) |
|
[2025-03-08 08:58:17,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3207168. Throughput: 0: 975.1. Samples: 801440. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:58:17,438][00316] Avg episode reward: [(0, '17.045')] |
|
[2025-03-08 08:58:22,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3227648. Throughput: 0: 976.9. Samples: 807462. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:58:22,438][00316] Avg episode reward: [(0, '16.943')] |
|
[2025-03-08 08:58:24,627][03210] Updated weights for policy 0, policy_version 790 (0.0020) |
|
[2025-03-08 08:58:27,437][00316] Fps is (10 sec: 3276.2, 60 sec: 3754.6, 300 sec: 3873.8). Total num frames: 3239936. Throughput: 0: 945.6. Samples: 809498. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:58:27,441][00316] Avg episode reward: [(0, '18.354')] |
|
[2025-03-08 08:58:27,461][03197] Saving new best policy, reward=18.354! |
|
[2025-03-08 08:58:32,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3860.0). Total num frames: 3260416. Throughput: 0: 934.2. Samples: 814302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:58:32,440][00316] Avg episode reward: [(0, '19.535')] |
|
[2025-03-08 08:58:32,443][03197] Saving new best policy, reward=19.535! |
|
[2025-03-08 08:58:35,834][03210] Updated weights for policy 0, policy_version 800 (0.0025) |
|
[2025-03-08 08:58:37,435][00316] Fps is (10 sec: 4096.7, 60 sec: 3822.9, 300 sec: 3873.9). Total num frames: 3280896. Throughput: 0: 939.6. Samples: 820720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:58:37,442][00316] Avg episode reward: [(0, '19.514')] |
|
[2025-03-08 08:58:42,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3860.0). Total num frames: 3297280. Throughput: 0: 933.8. Samples: 823850. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:58:42,436][00316] Avg episode reward: [(0, '20.124')] |
|
[2025-03-08 08:58:42,439][03197] Saving new best policy, reward=20.124! |
|
[2025-03-08 08:58:46,786][03210] Updated weights for policy 0, policy_version 810 (0.0024) |
|
[2025-03-08 08:58:47,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 3317760. Throughput: 0: 939.2. Samples: 828856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:58:47,437][00316] Avg episode reward: [(0, '17.909')] |
|
[2025-03-08 08:58:52,435][00316] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3342336. Throughput: 0: 939.9. Samples: 835454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 08:58:52,441][00316] Avg episode reward: [(0, '17.805')] |
|
[2025-03-08 08:58:57,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3860.0). Total num frames: 3354624. Throughput: 0: 927.0. Samples: 838218. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:58:57,438][00316] Avg episode reward: [(0, '17.231')] |
|
[2025-03-08 08:58:57,690][03210] Updated weights for policy 0, policy_version 820 (0.0021) |
|
[2025-03-08 08:59:02,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3379200. Throughput: 0: 939.5. Samples: 843718. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:59:02,437][00316] Avg episode reward: [(0, '16.704')] |
|
[2025-03-08 08:59:06,854][03210] Updated weights for policy 0, policy_version 830 (0.0024) |
|
[2025-03-08 08:59:07,435][00316] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3399680. Throughput: 0: 953.2. Samples: 850354. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 08:59:07,437][00316] Avg episode reward: [(0, '17.165')] |
|
[2025-03-08 08:59:12,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 3416064. Throughput: 0: 959.7. Samples: 852684. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:59:12,445][00316] Avg episode reward: [(0, '18.736')] |
|
[2025-03-08 08:59:17,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3436544. Throughput: 0: 985.1. Samples: 858630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 08:59:17,441][00316] Avg episode reward: [(0, '19.166')] |
|
[2025-03-08 08:59:17,456][03210] Updated weights for policy 0, policy_version 840 (0.0023) |
|
[2025-03-08 08:59:22,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3457024. Throughput: 0: 988.5. Samples: 865204. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:59:22,437][00316] Avg episode reward: [(0, '17.817')] |
|
[2025-03-08 08:59:27,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 3473408. Throughput: 0: 964.6. Samples: 867258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 08:59:27,437][00316] Avg episode reward: [(0, '17.140')] |
|
[2025-03-08 08:59:28,319][03210] Updated weights for policy 0, policy_version 850 (0.0021) |
|
[2025-03-08 08:59:32,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3497984. Throughput: 0: 992.6. Samples: 873522. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-03-08 08:59:32,440][00316] Avg episode reward: [(0, '16.619')] |
|
[2025-03-08 08:59:37,436][00316] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3518464. Throughput: 0: 983.8. Samples: 879726. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:59:37,440][00316] Avg episode reward: [(0, '16.707')] |
|
[2025-03-08 08:59:38,674][03210] Updated weights for policy 0, policy_version 860 (0.0015) |
|
[2025-03-08 08:59:42,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3534848. Throughput: 0: 966.4. Samples: 881706. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 08:59:42,437][00316] Avg episode reward: [(0, '17.575')] |
|
[2025-03-08 08:59:47,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 3555328. Throughput: 0: 990.5. Samples: 888292. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:59:47,436][00316] Avg episode reward: [(0, '18.187')] |
|
[2025-03-08 08:59:47,507][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000869_3559424.pth... |
|
[2025-03-08 08:59:47,632][03197] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000640_2621440.pth |
|
[2025-03-08 08:59:48,626][03210] Updated weights for policy 0, policy_version 870 (0.0015) |
|
[2025-03-08 08:59:52,437][00316] Fps is (10 sec: 4095.3, 60 sec: 3891.1, 300 sec: 3887.7). Total num frames: 3575808. Throughput: 0: 971.2. Samples: 894058. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:59:52,438][00316] Avg episode reward: [(0, '20.380')] |
|
[2025-03-08 08:59:52,440][03197] Saving new best policy, reward=20.380! |
|
[2025-03-08 08:59:57,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 3592192. Throughput: 0: 967.5. Samples: 896220. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 08:59:57,437][00316] Avg episode reward: [(0, '21.948')] |
|
[2025-03-08 08:59:57,446][03197] Saving new best policy, reward=21.948! |
|
[2025-03-08 08:59:59,799][03210] Updated weights for policy 0, policy_version 880 (0.0026) |
|
[2025-03-08 09:00:02,435][00316] Fps is (10 sec: 3687.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3612672. Throughput: 0: 981.9. Samples: 902816. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 09:00:02,440][00316] Avg episode reward: [(0, '20.922')] |
|
[2025-03-08 09:00:07,435][00316] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3633152. Throughput: 0: 959.2. Samples: 908370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 09:00:07,440][00316] Avg episode reward: [(0, '19.924')] |
|
[2025-03-08 09:00:10,580][03210] Updated weights for policy 0, policy_version 890 (0.0018) |
|
[2025-03-08 09:00:12,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3649536. Throughput: 0: 969.9. Samples: 910904. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 09:00:12,437][00316] Avg episode reward: [(0, '19.046')] |
|
[2025-03-08 09:00:17,435][00316] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3674112. Throughput: 0: 981.6. Samples: 917692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-08 09:00:17,441][00316] Avg episode reward: [(0, '18.096')] |
|
[2025-03-08 09:00:20,694][03210] Updated weights for policy 0, policy_version 900 (0.0016) |
|
[2025-03-08 09:00:22,436][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3690496. Throughput: 0: 956.7. Samples: 922778. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 09:00:22,438][00316] Avg episode reward: [(0, '19.299')] |
|
[2025-03-08 09:00:27,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3710976. Throughput: 0: 978.9. Samples: 925758. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 09:00:27,437][00316] Avg episode reward: [(0, '19.854')] |
|
[2025-03-08 09:00:30,547][03210] Updated weights for policy 0, policy_version 910 (0.0019) |
|
[2025-03-08 09:00:32,435][00316] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3735552. Throughput: 0: 983.2. Samples: 932538. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-08 09:00:32,437][00316] Avg episode reward: [(0, '20.026')] |
|
[2025-03-08 09:00:37,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3747840. Throughput: 0: 962.9. Samples: 937388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 09:00:37,437][00316] Avg episode reward: [(0, '21.615')] |
|
[2025-03-08 09:00:41,235][03210] Updated weights for policy 0, policy_version 920 (0.0013) |
|
[2025-03-08 09:00:42,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3772416. Throughput: 0: 990.6. Samples: 940796. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 09:00:42,437][00316] Avg episode reward: [(0, '21.297')] |
|
[2025-03-08 09:00:47,435][00316] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3792896. Throughput: 0: 993.3. Samples: 947516. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-08 09:00:47,443][00316] Avg episode reward: [(0, '20.399')] |
|
[2025-03-08 09:00:52,161][03210] Updated weights for policy 0, policy_version 930 (0.0029) |
|
[2025-03-08 09:00:52,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3901.6). Total num frames: 3809280. Throughput: 0: 977.8. Samples: 952372. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 09:00:52,440][00316] Avg episode reward: [(0, '21.115')] |
|
[2025-03-08 09:00:57,441][00316] Fps is (10 sec: 3274.8, 60 sec: 3890.8, 300 sec: 3873.8). Total num frames: 3825664. Throughput: 0: 987.0. Samples: 955324. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 09:00:57,443][00316] Avg episode reward: [(0, '21.199')] |
|
[2025-03-08 09:01:02,435][00316] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 3842048. Throughput: 0: 954.0. Samples: 960620. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 09:01:02,437][00316] Avg episode reward: [(0, '19.643')] |
|
[2025-03-08 09:01:04,432][03210] Updated weights for policy 0, policy_version 940 (0.0015) |
|
[2025-03-08 09:01:07,435][00316] Fps is (10 sec: 3688.6, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3862528. Throughput: 0: 949.7. Samples: 965514. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-08 09:01:07,437][00316] Avg episode reward: [(0, '19.349')] |
|
[2025-03-08 09:01:12,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3883008. Throughput: 0: 958.2. Samples: 968876. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 09:01:12,437][00316] Avg episode reward: [(0, '19.351')] |
|
[2025-03-08 09:01:13,603][03210] Updated weights for policy 0, policy_version 950 (0.0020) |
|
[2025-03-08 09:01:17,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 3903488. Throughput: 0: 950.6. Samples: 975314. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 09:01:17,440][00316] Avg episode reward: [(0, '19.273')] |
|
[2025-03-08 09:01:22,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3919872. Throughput: 0: 955.3. Samples: 980376. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2025-03-08 09:01:22,440][00316] Avg episode reward: [(0, '19.942')] |
|
[2025-03-08 09:01:24,430][03210] Updated weights for policy 0, policy_version 960 (0.0020) |
|
[2025-03-08 09:01:27,435][00316] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3944448. Throughput: 0: 955.1. Samples: 983776. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-03-08 09:01:27,438][00316] Avg episode reward: [(0, '19.984')] |
|
[2025-03-08 09:01:32,438][00316] Fps is (10 sec: 4094.9, 60 sec: 3754.5, 300 sec: 3887.7). Total num frames: 3960832. Throughput: 0: 941.1. Samples: 989868. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-08 09:01:32,440][00316] Avg episode reward: [(0, '20.115')] |
|
[2025-03-08 09:01:35,338][03210] Updated weights for policy 0, policy_version 970 (0.0018) |
|
[2025-03-08 09:01:37,435][00316] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3981312. Throughput: 0: 957.7. Samples: 995468. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-08 09:01:37,437][00316] Avg episode reward: [(0, '20.687')] |
|
[2025-03-08 09:01:42,436][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-03-08 09:01:42,444][00316] Component Batcher_0 stopped! |
|
[2025-03-08 09:01:42,453][00316] Component RolloutWorker_w3 process died already! Don't wait for it. |
|
[2025-03-08 09:01:42,443][03197] Stopping Batcher_0... |
|
[2025-03-08 09:01:42,463][03197] Loop batcher_evt_loop terminating... |
|
[2025-03-08 09:01:42,528][03210] Weights refcount: 2 0 |
|
[2025-03-08 09:01:42,532][03210] Stopping InferenceWorker_p0-w0... |
|
[2025-03-08 09:01:42,531][00316] Component InferenceWorker_p0-w0 stopped! |
|
[2025-03-08 09:01:42,533][03210] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-03-08 09:01:42,592][03197] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000755_3092480.pth |
|
[2025-03-08 09:01:42,612][03197] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-03-08 09:01:42,800][03197] Stopping LearnerWorker_p0... |
|
[2025-03-08 09:01:42,801][03197] Loop learner_proc0_evt_loop terminating... |
|
[2025-03-08 09:01:42,800][00316] Component LearnerWorker_p0 stopped! |
|
[2025-03-08 09:01:42,818][00316] Component RolloutWorker_w6 stopped! |
|
[2025-03-08 09:01:42,820][03217] Stopping RolloutWorker_w6... |
|
[2025-03-08 09:01:42,823][03217] Loop rollout_proc6_evt_loop terminating... |
|
[2025-03-08 09:01:42,833][03212] Stopping RolloutWorker_w1... |
|
[2025-03-08 09:01:42,833][00316] Component RolloutWorker_w1 stopped! |
|
[2025-03-08 09:01:42,837][03214] Stopping RolloutWorker_w2... |
|
[2025-03-08 09:01:42,838][00316] Component RolloutWorker_w2 stopped! |
|
[2025-03-08 09:01:42,840][03214] Loop rollout_proc2_evt_loop terminating... |
|
[2025-03-08 09:01:42,845][03216] Stopping RolloutWorker_w5... |
|
[2025-03-08 09:01:42,845][00316] Component RolloutWorker_w5 stopped! |
|
[2025-03-08 09:01:42,834][03212] Loop rollout_proc1_evt_loop terminating... |
|
[2025-03-08 09:01:42,852][00316] Component RolloutWorker_w0 stopped! |
|
[2025-03-08 09:01:42,856][03211] Stopping RolloutWorker_w0... |
|
[2025-03-08 09:01:42,859][03211] Loop rollout_proc0_evt_loop terminating... |
|
[2025-03-08 09:01:42,855][03216] Loop rollout_proc5_evt_loop terminating... |
|
[2025-03-08 09:01:42,866][00316] Component RolloutWorker_w7 stopped! |
|
[2025-03-08 09:01:42,866][03218] Stopping RolloutWorker_w7... |
|
[2025-03-08 09:01:42,871][03218] Loop rollout_proc7_evt_loop terminating... |
|
[2025-03-08 09:01:42,882][00316] Component RolloutWorker_w4 stopped! |
|
[2025-03-08 09:01:42,885][00316] Waiting for process learner_proc0 to stop... |
|
[2025-03-08 09:01:42,888][03215] Stopping RolloutWorker_w4... |
|
[2025-03-08 09:01:42,889][03215] Loop rollout_proc4_evt_loop terminating... |
|
[2025-03-08 09:01:45,244][00316] Waiting for process inference_proc0-0 to join... |
|
[2025-03-08 09:01:45,248][00316] Waiting for process rollout_proc0 to join... |
|
[2025-03-08 09:01:47,483][00316] Waiting for process rollout_proc1 to join... |
|
[2025-03-08 09:01:47,484][00316] Waiting for process rollout_proc2 to join... |
|
[2025-03-08 09:01:47,487][00316] Waiting for process rollout_proc3 to join... |
|
[2025-03-08 09:01:47,489][00316] Waiting for process rollout_proc4 to join... |
|
[2025-03-08 09:01:47,492][00316] Waiting for process rollout_proc5 to join... |
|
[2025-03-08 09:01:47,494][00316] Waiting for process rollout_proc6 to join... |
|
[2025-03-08 09:01:47,496][00316] Waiting for process rollout_proc7 to join... |
|
[2025-03-08 09:01:47,497][00316] Batcher 0 profile tree view: |
|
batching: 23.8225, releasing_batches: 0.0259 |
|
[2025-03-08 09:01:47,498][00316] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0031
    wait_policy_total: 416.1764
update_model: 8.8323
    weight_update: 0.0019
one_step: 0.0133
    handle_policy_step: 584.7898
        deserialize: 13.7620, stack: 3.2269, obs_to_device_normalize: 126.4398, forward: 306.8411, send_messages: 25.0445
        prepare_outputs: 85.0717
            to_cpu: 52.7674
|
[2025-03-08 09:01:47,499][00316] Learner 0 profile tree view: |
|
misc: 0.0043, prepare_batch: 12.6972
train: 71.1291
    epoch_init: 0.0114, minibatch_init: 0.0064, losses_postprocess: 0.6376, kl_divergence: 0.6184, after_optimizer: 33.3681
    calculate_losses: 24.6761
        losses_init: 0.0063, forward_head: 1.2746, bptt_initial: 16.5428, tail: 1.0035, advantages_returns: 0.2725, losses: 3.5039
        bptt: 1.8657
            bptt_forward_core: 1.8047
    update: 11.2362
        clip: 0.8824
|
[2025-03-08 09:01:47,501][00316] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3196, enqueue_policy_requests: 96.1852, env_step: 834.5614, overhead: 13.6333, complete_rollouts: 8.2939
save_policy_outputs: 20.4616
    split_output_tensors: 7.6808
|
[2025-03-08 09:01:47,502][00316] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.2320, enqueue_policy_requests: 134.6233, env_step: 793.8049, overhead: 13.0153, complete_rollouts: 6.3735
save_policy_outputs: 18.7055
    split_output_tensors: 7.1439
|
[2025-03-08 09:01:47,503][00316] Loop Runner_EvtLoop terminating... |
|
[2025-03-08 09:01:47,504][00316] Runner profile tree view: |
|
main_loop: 1074.0684 |
|
[2025-03-08 09:01:47,505][00316] Collected {0: 4005888}, FPS: 3729.6 |
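The closing summary is internally consistent with the profiles above: 4,005,888 frames over the 1074.0684 s main loop is the reported 3729.6 FPS. The rollout profiles also show where the time went; env_step alone accounts for roughly 834 s of worker 0's wall time, which suggests this run is environment-bound rather than GPU-bound.

total_frames = 4_005_888
main_loop_seconds = 1074.0684  # from the Runner profile above
print(round(total_frames / main_loop_seconds, 1))  # 3729.6, matching the log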
|
[2025-03-08 09:02:00,993][00316] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-03-08 09:02:00,994][00316] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-03-08 09:02:00,995][00316] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-03-08 09:02:00,996][00316] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-03-08 09:02:00,997][00316] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-08 09:02:00,998][00316] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-03-08 09:02:00,999][00316] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-08 09:02:01,000][00316] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-03-08 09:02:01,000][00316] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-03-08 09:02:01,001][00316] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-03-08 09:02:01,002][00316] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-03-08 09:02:01,006][00316] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-03-08 09:02:01,007][00316] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-03-08 09:02:01,008][00316] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-03-08 09:02:01,009][00316] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-03-08 09:02:01,039][00316] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:02:01,042][00316] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-08 09:02:01,044][00316] RunningMeanStd input shape: (1,) |
|
[2025-03-08 09:02:01,058][00316] ConvEncoder: input_channels=3 |
|
[2025-03-08 09:02:01,162][00316] Conv encoder output size: 512 |
|
[2025-03-08 09:02:01,163][00316] Policy head output size: 512 |
|
[2025-03-08 09:02:01,431][00316] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
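Evaluation restores the trained weights from the newest checkpoint. The file is a plain PyTorch checkpoint and can be inspected directly; the key names below are an assumption based on the "Loaded experiment state at self.train_step=978, self.env_steps=4005888" line printed when training resumes later in this log.

import torch

ckpt_path = ("/content/train_dir/default_experiment/checkpoint_p0/"
             "checkpoint_000000978_4005888.pth")
# On newer PyTorch, weights_only=False may be required because the checkpoint
# holds more than bare tensors.
ckpt = torch.load(ckpt_path, map_location="cpu")
print(sorted(ckpt.keys()))  # expected to include entries like 'model', 'train_step', 'env_steps'
print(ckpt.get("train_step"), ckpt.get("env_steps"))  # expected: 978 4005888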
|
[2025-03-08 09:02:02,180][00316] Num frames 100... |
|
[2025-03-08 09:02:02,309][00316] Num frames 200... |
|
[2025-03-08 09:02:02,437][00316] Num frames 300... |
|
[2025-03-08 09:02:02,566][00316] Num frames 400... |
|
[2025-03-08 09:02:02,693][00316] Num frames 500... |
|
[2025-03-08 09:02:02,823][00316] Num frames 600... |
|
[2025-03-08 09:02:02,952][00316] Num frames 700... |
|
[2025-03-08 09:02:03,086][00316] Num frames 800... |
|
[2025-03-08 09:02:03,223][00316] Num frames 900... |
|
[2025-03-08 09:02:03,351][00316] Num frames 1000... |
|
[2025-03-08 09:02:03,481][00316] Num frames 1100... |
|
[2025-03-08 09:02:03,643][00316] Avg episode rewards: #0: 24.840, true rewards: #0: 11.840 |
|
[2025-03-08 09:02:03,644][00316] Avg episode reward: 24.840, avg true_objective: 11.840 |
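The reward figures printed during evaluation are running means over the episodes played so far ("true rewards" appearing to be the environment's unshaped return, while "episode rewards" include the shaping used during training). Individual episode rewards can therefore be recovered by differencing consecutive values; the sketch below does this with the first three "#0:" running means printed in this evaluation (24.840, 16.480, 13.467):

# With running mean m_k after k episodes, episode k's reward is
# k * m_k - (k - 1) * m_{k-1}.
running_means = [24.840, 16.480, 13.467]
episode_rewards = []
prev_sum = 0.0
for k, mean in enumerate(running_means, start=1):
    episode_rewards.append(k * mean - prev_sum)
    prev_sum = k * mean
print([round(r, 2) for r in episode_rewards])  # [24.84, 8.12, 7.44]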
|
[2025-03-08 09:02:03,668][00316] Num frames 1200... |
|
[2025-03-08 09:02:03,795][00316] Num frames 1300... |
|
[2025-03-08 09:02:03,925][00316] Num frames 1400... |
|
[2025-03-08 09:02:04,063][00316] Num frames 1500... |
|
[2025-03-08 09:02:04,196][00316] Num frames 1600... |
|
[2025-03-08 09:02:04,374][00316] Avg episode rewards: #0: 16.480, true rewards: #0: 8.480 |
|
[2025-03-08 09:02:04,375][00316] Avg episode reward: 16.480, avg true_objective: 8.480 |
|
[2025-03-08 09:02:04,383][00316] Num frames 1700... |
|
[2025-03-08 09:02:04,511][00316] Num frames 1800... |
|
[2025-03-08 09:02:04,638][00316] Num frames 1900... |
|
[2025-03-08 09:02:04,765][00316] Num frames 2000... |
|
[2025-03-08 09:02:04,893][00316] Num frames 2100... |
|
[2025-03-08 09:02:05,028][00316] Num frames 2200... |
|
[2025-03-08 09:02:05,133][00316] Avg episode rewards: #0: 13.467, true rewards: #0: 7.467 |
|
[2025-03-08 09:02:05,134][00316] Avg episode reward: 13.467, avg true_objective: 7.467 |
|
[2025-03-08 09:02:05,218][00316] Num frames 2300... |
|
[2025-03-08 09:02:05,347][00316] Num frames 2400... |
|
[2025-03-08 09:02:05,477][00316] Num frames 2500... |
|
[2025-03-08 09:02:05,606][00316] Num frames 2600... |
|
[2025-03-08 09:02:05,737][00316] Num frames 2700... |
|
[2025-03-08 09:02:05,869][00316] Num frames 2800... |
|
[2025-03-08 09:02:05,999][00316] Num frames 2900... |
|
[2025-03-08 09:02:06,143][00316] Num frames 3000... |
|
[2025-03-08 09:02:06,276][00316] Num frames 3100... |
|
[2025-03-08 09:02:06,405][00316] Num frames 3200... |
|
[2025-03-08 09:02:06,534][00316] Num frames 3300... |
|
[2025-03-08 09:02:06,662][00316] Num frames 3400... |
|
[2025-03-08 09:02:06,749][00316] Avg episode rewards: #0: 17.560, true rewards: #0: 8.560 |
|
[2025-03-08 09:02:06,750][00316] Avg episode reward: 17.560, avg true_objective: 8.560 |
|
[2025-03-08 09:02:06,851][00316] Num frames 3500... |
|
[2025-03-08 09:02:06,981][00316] Num frames 3600... |
|
[2025-03-08 09:02:07,118][00316] Num frames 3700... |
|
[2025-03-08 09:02:07,254][00316] Num frames 3800... |
|
[2025-03-08 09:02:07,383][00316] Num frames 3900... |
|
[2025-03-08 09:02:07,561][00316] Num frames 4000... |
|
[2025-03-08 09:02:07,694][00316] Avg episode rewards: #0: 17.088, true rewards: #0: 8.088 |
|
[2025-03-08 09:02:07,696][00316] Avg episode reward: 17.088, avg true_objective: 8.088 |
|
[2025-03-08 09:02:07,797][00316] Num frames 4100... |
|
[2025-03-08 09:02:07,963][00316] Num frames 4200... |
|
[2025-03-08 09:02:08,147][00316] Num frames 4300... |
|
[2025-03-08 09:02:08,320][00316] Num frames 4400... |
|
[2025-03-08 09:02:08,488][00316] Num frames 4500... |
|
[2025-03-08 09:02:08,654][00316] Num frames 4600... |
|
[2025-03-08 09:02:08,832][00316] Num frames 4700... |
|
[2025-03-08 09:02:08,974][00316] Avg episode rewards: #0: 16.913, true rewards: #0: 7.913 |
|
[2025-03-08 09:02:08,975][00316] Avg episode reward: 16.913, avg true_objective: 7.913 |
|
[2025-03-08 09:02:09,068][00316] Num frames 4800... |
|
[2025-03-08 09:02:09,263][00316] Num frames 4900... |
|
[2025-03-08 09:02:09,393][00316] Num frames 5000... |
|
[2025-03-08 09:02:09,532][00316] Num frames 5100... |
|
[2025-03-08 09:02:09,661][00316] Num frames 5200... |
|
[2025-03-08 09:02:09,790][00316] Num frames 5300... |
|
[2025-03-08 09:02:09,918][00316] Num frames 5400... |
|
[2025-03-08 09:02:10,044][00316] Num frames 5500... |
|
[2025-03-08 09:02:10,119][00316] Avg episode rewards: #0: 16.737, true rewards: #0: 7.880 |
|
[2025-03-08 09:02:10,120][00316] Avg episode reward: 16.737, avg true_objective: 7.880 |
|
[2025-03-08 09:02:10,241][00316] Num frames 5600... |
|
[2025-03-08 09:02:10,369][00316] Num frames 5700... |
|
[2025-03-08 09:02:10,500][00316] Num frames 5800... |
|
[2025-03-08 09:02:10,626][00316] Num frames 5900... |
|
[2025-03-08 09:02:10,763][00316] Avg episode rewards: #0: 15.580, true rewards: #0: 7.455 |
|
[2025-03-08 09:02:10,763][00316] Avg episode reward: 15.580, avg true_objective: 7.455 |
|
[2025-03-08 09:02:10,812][00316] Num frames 6000... |
|
[2025-03-08 09:02:10,939][00316] Num frames 6100... |
|
[2025-03-08 09:02:11,068][00316] Num frames 6200... |
|
[2025-03-08 09:02:11,198][00316] Num frames 6300... |
|
[2025-03-08 09:02:11,335][00316] Num frames 6400... |
|
[2025-03-08 09:02:11,463][00316] Num frames 6500... |
|
[2025-03-08 09:02:11,592][00316] Num frames 6600... |
|
[2025-03-08 09:02:11,718][00316] Num frames 6700... |
|
[2025-03-08 09:02:11,781][00316] Avg episode rewards: #0: 15.339, true rewards: #0: 7.450 |
|
[2025-03-08 09:02:11,782][00316] Avg episode reward: 15.339, avg true_objective: 7.450 |
|
[2025-03-08 09:02:11,903][00316] Num frames 6800... |
|
[2025-03-08 09:02:12,030][00316] Num frames 6900... |
|
[2025-03-08 09:02:12,159][00316] Num frames 7000... |
|
[2025-03-08 09:02:12,293][00316] Num frames 7100... |
|
[2025-03-08 09:02:12,419][00316] Num frames 7200... |
|
[2025-03-08 09:02:12,548][00316] Num frames 7300... |
|
[2025-03-08 09:02:12,681][00316] Num frames 7400... |
|
[2025-03-08 09:02:12,810][00316] Num frames 7500... |
|
[2025-03-08 09:02:12,939][00316] Num frames 7600... |
|
[2025-03-08 09:02:13,066][00316] Num frames 7700... |
|
[2025-03-08 09:02:13,197][00316] Num frames 7800... |
|
[2025-03-08 09:02:13,326][00316] Avg episode rewards: #0: 16.452, true rewards: #0: 7.852 |
|
[2025-03-08 09:02:13,327][00316] Avg episode reward: 16.452, avg true_objective: 7.852 |
|
[2025-03-08 09:02:59,838][00316] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
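With the replay written to disk, a common way to watch it from the Colab notebook this log appears to come from (the /content paths are the giveaway) is to embed the mp4 as a data URL. This is a notebook convenience snippet, not part of Sample Factory:

from base64 import b64encode
from IPython.display import HTML

mp4 = open("/content/train_dir/default_experiment/replay.mp4", "rb").read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML(f'<video width=640 controls><source src="{data_url}" type="video/mp4"></video>')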
|
[2025-03-08 09:04:28,958][00316] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-03-08 09:04:28,959][00316] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-03-08 09:04:28,960][00316] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-03-08 09:04:28,960][00316] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-03-08 09:04:28,961][00316] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-08 09:04:28,962][00316] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-03-08 09:04:28,963][00316] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-03-08 09:04:28,964][00316] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-03-08 09:04:28,965][00316] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-03-08 09:04:28,966][00316] Adding new argument 'hf_repository'='ThomasSimonini/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-03-08 09:04:28,966][00316] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-03-08 09:04:28,967][00316] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-03-08 09:04:28,968][00316] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-03-08 09:04:28,969][00316] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-03-08 09:04:28,970][00316] Using frameskip 1 and render_action_repeat=4 for evaluation |
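This second evaluation differs from the first only in the overrides echoed above: max_num_frames drops to 100000 and push_to_hub/hf_repository are set, so after the episodes finish the experiment is uploaded to the given Hub repository. A hedged reconstruction of the invocation, using only argument names that appear in this log; the module path and environment name are assumptions about this setup:

import subprocess, sys

subprocess.run([
    sys.executable, "-m", "sf_examples.vizdoom.enjoy_vizdoom",  # assumed entry point
    "--env=doom_health_gathering_supreme",                      # assumed env name
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
    "--no_render",
    "--save_video",
    "--max_num_frames=100000",
    "--max_num_episodes=10",
    "--push_to_hub",
    "--hf_repository=ThomasSimonini/rl_course_vizdoom_health_gathering_supreme",
], check=True)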
|
[2025-03-08 09:04:28,995][00316] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-08 09:04:28,996][00316] RunningMeanStd input shape: (1,) |
|
[2025-03-08 09:04:29,007][00316] ConvEncoder: input_channels=3 |
|
[2025-03-08 09:04:29,041][00316] Conv encoder output size: 512 |
|
[2025-03-08 09:04:29,042][00316] Policy head output size: 512 |
|
[2025-03-08 09:04:29,061][00316] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-03-08 09:04:29,502][00316] Num frames 100... |
|
[2025-03-08 09:04:29,629][00316] Num frames 200... |
|
[2025-03-08 09:04:29,756][00316] Num frames 300... |
|
[2025-03-08 09:04:29,885][00316] Num frames 400... |
|
[2025-03-08 09:04:30,015][00316] Num frames 500... |
|
[2025-03-08 09:04:30,146][00316] Num frames 600... |
|
[2025-03-08 09:04:30,277][00316] Num frames 700... |
|
[2025-03-08 09:04:30,431][00316] Avg episode rewards: #0: 12.770, true rewards: #0: 7.770 |
|
[2025-03-08 09:04:30,432][00316] Avg episode reward: 12.770, avg true_objective: 7.770 |
|
[2025-03-08 09:04:30,465][00316] Num frames 800... |
|
[2025-03-08 09:04:30,601][00316] Num frames 900... |
|
[2025-03-08 09:04:30,730][00316] Num frames 1000... |
|
[2025-03-08 09:04:30,868][00316] Num frames 1100... |
|
[2025-03-08 09:04:30,993][00316] Num frames 1200... |
|
[2025-03-08 09:04:31,124][00316] Num frames 1300... |
|
[2025-03-08 09:04:31,257][00316] Num frames 1400... |
|
[2025-03-08 09:04:31,385][00316] Num frames 1500... |
|
[2025-03-08 09:04:31,538][00316] Avg episode rewards: #0: 14.885, true rewards: #0: 7.885 |
|
[2025-03-08 09:04:31,539][00316] Avg episode reward: 14.885, avg true_objective: 7.885 |
|
[2025-03-08 09:04:31,570][00316] Num frames 1600... |
|
[2025-03-08 09:04:31,697][00316] Num frames 1700... |
|
[2025-03-08 09:04:31,825][00316] Num frames 1800... |
|
[2025-03-08 09:04:31,952][00316] Num frames 1900... |
|
[2025-03-08 09:04:32,079][00316] Num frames 2000... |
|
[2025-03-08 09:04:32,209][00316] Num frames 2100... |
|
[2025-03-08 09:04:32,335][00316] Num frames 2200... |
|
[2025-03-08 09:04:32,467][00316] Num frames 2300... |
|
[2025-03-08 09:04:32,604][00316] Num frames 2400... |
|
[2025-03-08 09:04:32,734][00316] Num frames 2500... |
|
[2025-03-08 09:04:32,866][00316] Num frames 2600... |
|
[2025-03-08 09:04:32,924][00316] Avg episode rewards: #0: 16.670, true rewards: #0: 8.670 |
|
[2025-03-08 09:04:32,925][00316] Avg episode reward: 16.670, avg true_objective: 8.670 |
|
[2025-03-08 09:04:33,054][00316] Num frames 2700... |
|
[2025-03-08 09:04:33,186][00316] Num frames 2800... |
|
[2025-03-08 09:04:33,318][00316] Num frames 2900... |
|
[2025-03-08 09:04:33,447][00316] Num frames 3000... |
|
[2025-03-08 09:04:33,581][00316] Num frames 3100... |
|
[2025-03-08 09:04:33,712][00316] Num frames 3200... |
|
[2025-03-08 09:04:33,842][00316] Num frames 3300... |
|
[2025-03-08 09:04:33,970][00316] Num frames 3400... |
|
[2025-03-08 09:04:34,069][00316] Avg episode rewards: #0: 16.083, true rewards: #0: 8.582 |
|
[2025-03-08 09:04:34,070][00316] Avg episode reward: 16.083, avg true_objective: 8.582 |
|
[2025-03-08 09:04:34,164][00316] Num frames 3500... |
|
[2025-03-08 09:04:34,296][00316] Num frames 3600... |
|
[2025-03-08 09:04:34,425][00316] Num frames 3700... |
|
[2025-03-08 09:04:34,558][00316] Num frames 3800... |
|
[2025-03-08 09:04:34,693][00316] Num frames 3900... |
|
[2025-03-08 09:04:34,822][00316] Num frames 4000... |
|
[2025-03-08 09:04:34,955][00316] Num frames 4100... |
|
[2025-03-08 09:04:35,084][00316] Num frames 4200... |
|
[2025-03-08 09:04:35,226][00316] Avg episode rewards: #0: 16.130, true rewards: #0: 8.530 |
|
[2025-03-08 09:04:35,227][00316] Avg episode reward: 16.130, avg true_objective: 8.530 |
|
[2025-03-08 09:04:35,272][00316] Num frames 4300... |
|
[2025-03-08 09:04:35,404][00316] Num frames 4400... |
|
[2025-03-08 09:04:35,540][00316] Num frames 4500... |
|
[2025-03-08 09:04:35,673][00316] Num frames 4600... |
|
[2025-03-08 09:04:35,802][00316] Num frames 4700... |
|
[2025-03-08 09:04:35,929][00316] Num frames 4800... |
|
[2025-03-08 09:04:36,057][00316] Num frames 4900... |
|
[2025-03-08 09:04:36,205][00316] Avg episode rewards: #0: 15.282, true rewards: #0: 8.282 |
|
[2025-03-08 09:04:36,206][00316] Avg episode reward: 15.282, avg true_objective: 8.282 |
|
[2025-03-08 09:04:36,247][00316] Num frames 5000... |
|
[2025-03-08 09:04:36,374][00316] Num frames 5100... |
|
[2025-03-08 09:04:36,503][00316] Num frames 5200... |
|
[2025-03-08 09:04:36,635][00316] Num frames 5300... |
|
[2025-03-08 09:04:36,774][00316] Num frames 5400... |
|
[2025-03-08 09:04:36,904][00316] Num frames 5500... |
|
[2025-03-08 09:04:37,043][00316] Num frames 5600... |
|
[2025-03-08 09:06:10,537][11505] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-03-08 09:06:10,542][11505] Rollout worker 0 uses device cpu |
|
[2025-03-08 09:06:10,544][11505] Rollout worker 1 uses device cpu |
|
[2025-03-08 09:06:10,547][11505] Rollout worker 2 uses device cpu |
|
[2025-03-08 09:06:10,549][11505] Rollout worker 3 uses device cpu |
|
[2025-03-08 09:06:10,551][11505] Rollout worker 4 uses device cpu |
|
[2025-03-08 09:06:10,553][11505] Rollout worker 5 uses device cpu |
|
[2025-03-08 09:06:10,555][11505] Rollout worker 6 uses device cpu |
|
[2025-03-08 09:06:10,558][11505] Rollout worker 7 uses device cpu |
|
[2025-03-08 09:06:10,775][11505] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 09:06:10,777][11505] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-03-08 09:06:10,820][11505] Starting all processes... |
|
[2025-03-08 09:06:10,821][11505] Starting process learner_proc0 |
|
[2025-03-08 09:06:10,925][11505] Starting all processes... |
|
[2025-03-08 09:06:10,964][11505] Starting process inference_proc0-0 |
|
[2025-03-08 09:06:10,970][11505] Starting process rollout_proc0 |
|
[2025-03-08 09:06:10,979][11505] Starting process rollout_proc1 |
|
[2025-03-08 09:06:10,991][11505] Starting process rollout_proc2 |
|
[2025-03-08 09:06:10,994][11505] Starting process rollout_proc3 |
|
[2025-03-08 09:06:10,994][11505] Starting process rollout_proc4 |
|
[2025-03-08 09:06:10,994][11505] Starting process rollout_proc5 |
|
[2025-03-08 09:06:10,994][11505] Starting process rollout_proc6 |
|
[2025-03-08 09:06:10,994][11505] Starting process rollout_proc7 |
|
[2025-03-08 09:06:30,908][12022] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 09:06:30,909][12022] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-03-08 09:06:30,992][12022] Num visible devices: 1 |
|
[2025-03-08 09:06:31,025][11505] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-03-08 09:06:31,081][12024] Worker 2 uses CPU cores [0] |
|
[2025-03-08 09:06:31,156][12009] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 09:06:31,157][12009] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-03-08 09:06:31,208][12009] Num visible devices: 1 |
|
[2025-03-08 09:06:31,239][12009] Starting seed is not provided |
|
[2025-03-08 09:06:31,240][12009] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 09:06:31,241][12009] Initializing actor-critic model on device cuda:0 |
|
[2025-03-08 09:06:31,242][12009] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-08 09:06:31,244][12009] RunningMeanStd input shape: (1,) |
|
[2025-03-08 09:06:31,257][12028] Worker 6 uses CPU cores [0] |
|
[2025-03-08 09:06:31,259][11505] Heartbeat connected on RolloutWorker_w2 |
|
[2025-03-08 09:06:31,269][12026] Worker 4 uses CPU cores [0] |
|
[2025-03-08 09:06:31,324][12025] Worker 5 uses CPU cores [1] |
|
[2025-03-08 09:06:31,399][11505] Heartbeat connected on RolloutWorker_w6 |
|
[2025-03-08 09:06:31,407][12029] Worker 1 uses CPU cores [1] |
|
[2025-03-08 09:06:31,414][11505] Heartbeat connected on RolloutWorker_w5 |
|
[2025-03-08 09:06:31,429][12027] Worker 3 uses CPU cores [1] |
|
[2025-03-08 09:06:31,435][11505] Heartbeat connected on RolloutWorker_w1 |
|
[2025-03-08 09:06:31,438][11505] Heartbeat connected on RolloutWorker_w3 |
|
[2025-03-08 09:06:31,451][12030] Worker 7 uses CPU cores [1] |
|
[2025-03-08 09:06:31,462][11505] Heartbeat connected on RolloutWorker_w7 |
|
[2025-03-08 09:06:31,471][11505] Heartbeat connected on RolloutWorker_w4 |
|
[2025-03-08 09:06:31,492][11505] Heartbeat connected on Batcher_0 |
|
[2025-03-08 09:06:31,501][12023] Worker 0 uses CPU cores [0] |
|
[2025-03-08 09:06:31,507][12009] ConvEncoder: input_channels=3 |
|
[2025-03-08 09:06:31,511][11505] Heartbeat connected on RolloutWorker_w0 |
|
[2025-03-08 09:06:31,614][12009] Conv encoder output size: 512 |
|
[2025-03-08 09:06:31,614][12009] Policy head output size: 512 |
|
[2025-03-08 09:06:31,630][12009] Created Actor Critic model with architecture: |
|
[2025-03-08 09:06:31,630][12009] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
|
[2025-03-08 09:06:31,893][12009] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-03-08 09:06:32,811][12009] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-03-08 09:06:32,846][12009] Loading model from checkpoint |
|
[2025-03-08 09:06:32,848][12009] Loaded experiment state at self.train_step=978, self.env_steps=4005888 |
|
[2025-03-08 09:06:32,849][12009] Initialized policy 0 weights for model version 978 |
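Because the experiment state is restored along with the weights, the resumed run continues from policy version 978 and 4,005,888 environment frames rather than from zero; the first status line below accordingly reports "Total num frames: 4005888" before any new frames are collected. The version/frame arithmetic matches the checkpoint names seen throughout:

for version in (978, 980):           # the restored version, and the one saved at shutdown below
    print(version, version * 4096)   # 4005888 and 4014080, matching the checkpoint filenames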
|
[2025-03-08 09:06:32,851][12009] LearnerWorker_p0 finished initialization! |
|
[2025-03-08 09:06:32,852][12009] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-08 09:06:32,852][11505] Heartbeat connected on LearnerWorker_p0 |
|
[2025-03-08 09:06:33,063][12022] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-08 09:06:33,065][12022] RunningMeanStd input shape: (1,) |
|
[2025-03-08 09:06:33,133][12022] ConvEncoder: input_channels=3 |
|
[2025-03-08 09:06:33,239][12022] Conv encoder output size: 512 |
|
[2025-03-08 09:06:33,240][12022] Policy head output size: 512 |
|
[2025-03-08 09:06:33,276][11505] Inference worker 0-0 is ready! |
|
[2025-03-08 09:06:33,277][11505] All inference workers are ready! Signal rollout workers to start! |
|
[2025-03-08 09:06:33,492][12028] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:33,490][12024] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:33,500][12030] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:33,498][12029] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:33,498][12023] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:33,502][12027] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:33,495][12025] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:33,500][12026] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:33,751][11505] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-08 09:06:34,111][12024] Decorrelating experience for 0 frames... |
|
[2025-03-08 09:06:34,806][12025] Decorrelating experience for 0 frames... |
|
[2025-03-08 09:06:34,809][12030] Decorrelating experience for 0 frames... |
|
[2025-03-08 09:06:34,812][12027] Decorrelating experience for 0 frames... |
|
[2025-03-08 09:06:34,814][12029] Decorrelating experience for 0 frames... |
|
[2025-03-08 09:06:35,576][12025] Decorrelating experience for 32 frames... |
|
[2025-03-08 09:06:35,578][12029] Decorrelating experience for 32 frames... |
|
[2025-03-08 09:06:35,940][12024] Decorrelating experience for 32 frames... |
|
[2025-03-08 09:06:36,011][12028] Decorrelating experience for 0 frames... |
|
[2025-03-08 09:06:36,026][12023] Decorrelating experience for 0 frames... |
|
[2025-03-08 09:06:36,696][12027] Decorrelating experience for 32 frames... |
|
[2025-03-08 09:06:36,885][12028] Decorrelating experience for 32 frames... |
|
[2025-03-08 09:06:36,903][12023] Decorrelating experience for 32 frames... |
|
[2025-03-08 09:06:37,123][12025] Decorrelating experience for 64 frames... |
|
[2025-03-08 09:06:37,335][12029] Decorrelating experience for 64 frames... |
|
[2025-03-08 09:06:38,116][12030] Decorrelating experience for 32 frames... |
|
[2025-03-08 09:06:38,455][12026] Decorrelating experience for 0 frames... |
|
[2025-03-08 09:06:38,751][11505] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-08 09:06:38,773][12028] Decorrelating experience for 64 frames... |
|
[2025-03-08 09:06:38,801][12027] Decorrelating experience for 64 frames... |
|
[2025-03-08 09:06:38,845][12023] Decorrelating experience for 64 frames... |
|
[2025-03-08 09:06:38,974][12024] Decorrelating experience for 64 frames... |
|
[2025-03-08 09:06:39,467][12029] Decorrelating experience for 96 frames... |
|
[2025-03-08 09:06:40,690][12026] Decorrelating experience for 32 frames... |
|
[2025-03-08 09:06:40,761][12028] Decorrelating experience for 96 frames... |
|
[2025-03-08 09:06:41,954][12025] Decorrelating experience for 96 frames... |
|
[2025-03-08 09:06:42,047][12030] Decorrelating experience for 64 frames... |
|
[2025-03-08 09:06:42,229][12027] Decorrelating experience for 96 frames... |
|
[2025-03-08 09:06:42,934][12024] Decorrelating experience for 96 frames... |
|
[2025-03-08 09:06:43,751][11505] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 46.2. Samples: 462. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-08 09:06:43,752][11505] Avg episode reward: [(0, '3.090')] |
|
[2025-03-08 09:06:45,025][12023] Decorrelating experience for 96 frames... |
|
[2025-03-08 09:06:45,371][12026] Decorrelating experience for 64 frames... |
|
[2025-03-08 09:06:45,409][12030] Decorrelating experience for 96 frames... |
|
[2025-03-08 09:06:46,164][12009] Signal inference workers to stop experience collection... |
|
[2025-03-08 09:06:46,181][12022] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-03-08 09:06:46,531][12026] Decorrelating experience for 96 frames... |
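"Decorrelating experience" staggers the parallel environments with different warm-up lengths (0, 32, 64 and 96 frames here) so their trajectories start out of phase instead of all resetting in lockstep. A self-contained toy illustration of the idea, not Sample Factory's implementation:

class ToyEnv:
    """Stand-in environment that only counts how many steps it has taken."""
    def __init__(self):
        self.t = 0
    def step(self):
        self.t += 1

envs = [ToyEnv() for _ in range(4)]
for i, env in enumerate(envs):
    for _ in range(i * 32):        # 0, 32, 64, 96 warm-up frames, as in the log
        env.step()
print([env.t for env in envs])     # [0, 32, 64, 96]: offset starting points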
|
[2025-03-08 09:06:46,876][12009] Signal inference workers to resume experience collection... |
|
[2025-03-08 09:06:46,877][12022] InferenceWorker_p0-w0: resuming experience collection |
|
[2025-03-08 09:06:46,882][12009] Stopping Batcher_0... |
|
[2025-03-08 09:06:46,883][12009] Loop batcher_evt_loop terminating... |
|
[2025-03-08 09:06:46,883][11505] Component Batcher_0 stopped! |
|
[2025-03-08 09:06:46,997][12022] Weights refcount: 2 0 |
|
[2025-03-08 09:06:47,005][11505] Component InferenceWorker_p0-w0 stopped! |
|
[2025-03-08 09:06:47,007][12022] Stopping InferenceWorker_p0-w0... |
|
[2025-03-08 09:06:47,008][12022] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-03-08 09:06:47,168][11505] Component RolloutWorker_w1 stopped! |
|
[2025-03-08 09:06:47,167][12029] Stopping RolloutWorker_w1... |
|
[2025-03-08 09:06:47,172][12029] Loop rollout_proc1_evt_loop terminating... |
|
[2025-03-08 09:06:47,195][11505] Component RolloutWorker_w5 stopped! |
|
[2025-03-08 09:06:47,199][12025] Stopping RolloutWorker_w5... |
|
[2025-03-08 09:06:47,207][11505] Component RolloutWorker_w3 stopped! |
|
[2025-03-08 09:06:47,210][12027] Stopping RolloutWorker_w3... |
|
[2025-03-08 09:06:47,200][12025] Loop rollout_proc5_evt_loop terminating... |
|
[2025-03-08 09:06:47,212][12027] Loop rollout_proc3_evt_loop terminating... |
|
[2025-03-08 09:06:47,221][11505] Component RolloutWorker_w7 stopped! |
|
[2025-03-08 09:06:47,224][12030] Stopping RolloutWorker_w7... |
|
[2025-03-08 09:06:47,234][12030] Loop rollout_proc7_evt_loop terminating... |
|
[2025-03-08 09:06:47,333][11505] Component RolloutWorker_w4 stopped! |
|
[2025-03-08 09:06:47,334][12026] Stopping RolloutWorker_w4... |
|
[2025-03-08 09:06:47,335][12026] Loop rollout_proc4_evt_loop terminating... |
|
[2025-03-08 09:06:47,361][11505] Component RolloutWorker_w0 stopped! |
|
[2025-03-08 09:06:47,358][12028] Stopping RolloutWorker_w6... |
|
[2025-03-08 09:06:47,363][11505] Component RolloutWorker_w6 stopped! |
|
[2025-03-08 09:06:47,369][12028] Loop rollout_proc6_evt_loop terminating... |
|
[2025-03-08 09:06:47,369][12023] Stopping RolloutWorker_w0... |
|
[2025-03-08 09:06:47,375][12009] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2025-03-08 09:06:47,387][12024] Stopping RolloutWorker_w2... |
|
[2025-03-08 09:06:47,377][12023] Loop rollout_proc0_evt_loop terminating... |
|
[2025-03-08 09:06:47,388][12024] Loop rollout_proc2_evt_loop terminating... |
|
[2025-03-08 09:06:47,386][11505] Component RolloutWorker_w2 stopped! |
|
[2025-03-08 09:06:47,539][12009] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000869_3559424.pth |
|
[2025-03-08 09:06:47,557][12009] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
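The checkpoint filename appears to encode the learner's training-step counter and the total environment frames (checkpoint_<steps>_<frames>.pth is an assumed naming scheme). The numbers above imply exactly 4096 frames consumed per training step, and the removed predecessor checkpoint is consistent with that:

```python
# Parsed from checkpoint_000000980_4014080.pth (naming scheme assumed).
print(4_014_080 / 980)  # -> 4096.0 frames per training step
# The removed checkpoint_000000869_3559424.pth gives the same ratio:
print(3_559_424 / 869)  # -> 4096.0
```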
[2025-03-08 09:06:47,737][11505] Component LearnerWorker_p0 stopped! |
|
[2025-03-08 09:06:47,739][11505] Waiting for process learner_proc0 to stop... |
|
[2025-03-08 09:06:47,741][12009] Stopping LearnerWorker_p0... |
|
[2025-03-08 09:06:47,744][12009] Loop learner_proc0_evt_loop terminating... |
|
[2025-03-08 09:06:49,497][11505] Waiting for process inference_proc0-0 to join... |
|
[2025-03-08 09:06:49,499][11505] Waiting for process rollout_proc0 to join... |
|
[2025-03-08 09:06:51,467][11505] Waiting for process rollout_proc1 to join... |
|
[2025-03-08 09:06:51,472][11505] Waiting for process rollout_proc2 to join... |
|
[2025-03-08 09:06:51,477][11505] Waiting for process rollout_proc3 to join... |
|
[2025-03-08 09:06:51,478][11505] Waiting for process rollout_proc4 to join... |
|
[2025-03-08 09:06:51,481][11505] Waiting for process rollout_proc5 to join... |
|
[2025-03-08 09:06:51,482][11505] Waiting for process rollout_proc6 to join... |
|
[2025-03-08 09:06:51,483][11505] Waiting for process rollout_proc7 to join... |
|
[2025-03-08 09:06:51,485][11505] Batcher 0 profile tree view: |
|
batching: 0.0553, releasing_batches: 0.0016 |
|
[2025-03-08 09:06:51,486][11505] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0051 |
|
  wait_policy_total: 9.5529 |
|
update_model: 0.0284 |
|
  weight_update: 0.0013 |
|
one_step: 0.0544 |
|
  handle_policy_step: 3.1940 |
|
    deserialize: 0.0659, stack: 0.0091, obs_to_device_normalize: 0.6087, forward: 1.9707, send_messages: 0.1000 |
|
    prepare_outputs: 0.3658 |
|
      to_cpu: 0.2265 |
|
[2025-03-08 09:06:51,487][11505] Learner 0 profile tree view: |
|
misc: 0.0000, prepare_batch: 1.6302 |
|
train: 2.1683 |
|
  epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0005, kl_divergence: 0.0231, after_optimizer: 0.0349 |
|
  calculate_losses: 0.6816 |
|
    losses_init: 0.0000, forward_head: 0.3913, bptt_initial: 0.1901, tail: 0.0410, advantages_returns: 0.0012, losses: 0.0533 |
|
    bptt: 0.0041 |
|
      bptt_forward_core: 0.0040 |
|
  update: 1.4273 |
|
    clip: 0.0419 |
|
[2025-03-08 09:06:51,488][11505] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.0003, enqueue_policy_requests: 0.1646, env_step: 0.5075, overhead: 0.0113, complete_rollouts: 0.0000 |
|
save_policy_outputs: 0.0214 |
|
  split_output_tensors: 0.0019 |
|
[2025-03-08 09:06:51,489][11505] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.0003, enqueue_policy_requests: 0.0344, env_step: 0.2122, overhead: 0.0014, complete_rollouts: 0.0000 |
|
save_policy_outputs: 0.0286 |
|
  split_output_tensors: 0.0007 |
|
[2025-03-08 09:06:51,491][11505] Loop Runner_EvtLoop terminating... |
|
[2025-03-08 09:06:51,492][11505] Runner profile tree view: |
|
main_loop: 40.6727 |
|
[2025-03-08 09:06:51,493][11505] Collected {0: 4014080}, FPS: 201.4 |
|
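The final throughput line can be reproduced from numbers earlier in the log: the session resumed at 4,005,888 frames (first FPS report after restart) and exited at 4,014,080, over a 40.67-second main loop:

```python
frames_at_resume = 4_005_888   # "Total num frames" in the first FPS report
frames_at_exit = 4_014_080     # "Collected {0: 4014080}"
main_loop_seconds = 40.6727    # "main_loop: 40.6727"
fps = (frames_at_exit - frames_at_resume) / main_loop_seconds
print(f"{fps:.1f}")            # -> 201.4, matching "FPS: 201.4"
```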
[2025-03-08 09:06:51,511][11505] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-03-08 09:06:51,512][11505] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-03-08 09:06:51,513][11505] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-03-08 09:06:51,514][11505] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-03-08 09:06:51,516][11505] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-08 09:06:51,517][11505] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-03-08 09:06:51,518][11505] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-08 09:06:51,519][11505] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-03-08 09:06:51,519][11505] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-03-08 09:06:51,520][11505] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-03-08 09:06:51,521][11505] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-03-08 09:06:51,522][11505] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-03-08 09:06:51,523][11505] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-03-08 09:06:51,523][11505] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-03-08 09:06:51,524][11505] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
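The block above shows the evaluation run restoring the saved training configuration and layering evaluation-only arguments on top of it. A minimal sketch of that merge logic (hypothetical helper, not the library's exact code):

```python
import json

def load_eval_config(config_path, cli_overrides):
    # Restore the saved training configuration, then apply CLI overrides,
    # echoing the same messages the log shows above.
    with open(config_path) as f:
        cfg = json.load(f)
    for key, value in cli_overrides.items():
        if key in cfg:
            print(f"Overriding arg {key!r} with value {value!r} passed from command line")
        else:
            print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg

cfg = load_eval_config(
    "/content/train_dir/default_experiment/config.json",
    {"num_workers": 1, "no_render": True, "save_video": True, "max_num_episodes": 10},
)
```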
[2025-03-08 09:06:51,555][11505] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-08 09:06:51,560][11505] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-08 09:06:51,562][11505] RunningMeanStd input shape: (1,) |
|
[2025-03-08 09:06:51,574][11505] ConvEncoder: input_channels=3 |
|
[2025-03-08 09:06:51,681][11505] Conv encoder output size: 512 |
|
[2025-03-08 09:06:51,682][11505] Policy head output size: 512 |
|
[2025-03-08 09:06:51,974][11505] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
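Restoring the policy amounts to loading the checkpoint file named above into the rebuilt actor-critic. A quick way to inspect what such a checkpoint contains (the key layout is an assumption about the stored format):

```python
import torch

ckpt_path = ("/content/train_dir/default_experiment/"
             "checkpoint_p0/checkpoint_000000980_4014080.pth")
checkpoint = torch.load(ckpt_path, map_location="cpu")
# Expected to be a dict with model weights plus trainer state (assumption);
# the model weights are all that evaluation needs.
print(list(checkpoint.keys()))
```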
[2025-03-08 09:06:52,781][11505] Num frames 100... |
|
[2025-03-08 09:06:52,971][11505] Num frames 200... |
|
[2025-03-08 09:06:53,147][11505] Num frames 300... |
|
[2025-03-08 09:06:53,318][11505] Num frames 400... |
|
[2025-03-08 09:06:53,502][11505] Num frames 500... |
|
[2025-03-08 09:06:53,676][11505] Num frames 600... |
|
[2025-03-08 09:06:53,843][11505] Num frames 700... |
|
[2025-03-08 09:06:54,031][11505] Num frames 800... |
|
[2025-03-08 09:06:54,206][11505] Num frames 900... |
|
[2025-03-08 09:06:54,390][11505] Num frames 1000... |
|
[2025-03-08 09:06:54,578][11505] Num frames 1100... |
|
[2025-03-08 09:06:54,709][11505] Num frames 1200... |
|
[2025-03-08 09:06:54,838][11505] Num frames 1300... |
|
[2025-03-08 09:06:54,970][11505] Num frames 1400... |
|
[2025-03-08 09:06:55,093][11505] Avg episode rewards: #0: 34.460, true rewards: #0: 14.460 |
|
[2025-03-08 09:06:55,094][11505] Avg episode reward: 34.460, avg true_objective: 14.460 |
|
[2025-03-08 09:06:55,169][11505] Num frames 1500... |
|
[2025-03-08 09:06:55,296][11505] Num frames 1600... |
|
[2025-03-08 09:06:55,425][11505] Num frames 1700... |
|
[2025-03-08 09:06:55,560][11505] Num frames 1800... |
|
[2025-03-08 09:06:55,695][11505] Num frames 1900... |
|
[2025-03-08 09:06:55,830][11505] Num frames 2000... |
|
[2025-03-08 09:06:55,965][11505] Num frames 2100... |
|
[2025-03-08 09:06:56,108][11505] Num frames 2200... |
|
[2025-03-08 09:06:56,244][11505] Num frames 2300... |
|
[2025-03-08 09:06:56,378][11505] Num frames 2400... |
|
[2025-03-08 09:06:56,513][11505] Num frames 2500... |
|
[2025-03-08 09:06:56,648][11505] Num frames 2600... |
|
[2025-03-08 09:06:56,738][11505] Avg episode rewards: #0: 31.625, true rewards: #0: 13.125 |
|
[2025-03-08 09:06:56,739][11505] Avg episode reward: 31.625, avg true_objective: 13.125 |
|
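Each "Avg episode rewards" line is a running mean over the episodes finished so far, so consecutive reports let you recover an individual episode's scores; from the two reports above, the second episode scored about 28.79 raw and 11.79 on the true objective:

```python
avg1_reward, avg1_true = 34.460, 14.460   # running averages after episode 1
avg2_reward, avg2_true = 31.625, 13.125   # running averages after episode 2

# Undo the running mean to isolate episode 2's own scores.
print(2 * avg2_reward - avg1_reward)  # -> 28.79 (episode 2 reward)
print(2 * avg2_true - avg1_true)      # -> 11.79 (episode 2 true objective)
```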
[2025-03-08 09:06:56,838][11505] Num frames 2700... |
|
[2025-03-08 09:06:56,973][11505] Num frames 2800... |
|
[2025-03-08 09:06:57,114][11505] Num frames 2900... |
|
[2025-03-08 09:06:57,249][11505] Num frames 3000... |
|
[2025-03-08 09:06:57,383][11505] Num frames 3100... |
|
[2025-03-08 09:06:57,585][11505] Num frames 3200... |
|
[2025-03-08 09:06:57,754][11505] Num frames 3300... |
|
[2025-03-08 09:06:57,849][11505] Avg episode rewards: #0: 24.763, true rewards: #0: 11.097 |
|
[2025-03-08 09:06:57,850][11505] Avg episode reward: 24.763, avg true_objective: 11.097 |
|
[2025-03-08 09:06:57,943][11505] Num frames 3400... |
|
[2025-03-08 09:06:58,142][11505] Num frames 3500... |
|
[2025-03-08 09:06:58,330][11505] Num frames 3600... |
|
[2025-03-08 09:06:58,465][11505] Num frames 3700... |
|
[2025-03-08 09:06:58,599][11505] Num frames 3800... |
|
[2025-03-08 09:06:58,758][11505] Num frames 3900... |
|
[2025-03-08 09:06:58,988][11505] Num frames 4000... |
|
[2025-03-08 09:06:59,331][11505] Avg episode rewards: #0: 22.493, true rewards: #0: 10.242 |
|
[2025-03-08 09:06:59,334][11505] Avg episode reward: 22.493, avg true_objective: 10.242 |
|
[2025-03-08 09:06:59,342][11505] Num frames 4100... |
|
[2025-03-08 09:06:59,608][11505] Num frames 4200... |
|
[2025-03-08 09:06:59,925][11505] Num frames 4300... |
|
[2025-03-08 09:07:00,144][11505] Num frames 4400... |
|
[2025-03-08 09:07:00,420][11505] Num frames 4500... |
|
[2025-03-08 09:07:00,516][11505] Avg episode rewards: #0: 20.226, true rewards: #0: 9.026 |
|
[2025-03-08 09:07:00,518][11505] Avg episode reward: 20.226, avg true_objective: 9.026 |
|
[2025-03-08 09:07:00,847][11505] Num frames 4600... |
|
[2025-03-08 09:07:01,109][11505] Num frames 4700... |
|
[2025-03-08 09:07:01,350][11505] Num frames 4800... |
|
[2025-03-08 09:07:01,615][11505] Num frames 4900... |
|
[2025-03-08 09:07:01,812][11505] Num frames 5000... |
|
[2025-03-08 09:07:02,075][11505] Num frames 5100... |
|
[2025-03-08 09:07:02,355][11505] Num frames 5200... |
|
[2025-03-08 09:07:02,591][11505] Num frames 5300... |
|
[2025-03-08 09:07:02,798][11505] Num frames 5400... |
|
[2025-03-08 09:07:03,067][11505] Num frames 5500... |
|
[2025-03-08 09:07:03,333][11505] Num frames 5600... |
|
[2025-03-08 09:07:03,595][11505] Num frames 5700... |
|
[2025-03-08 09:07:03,854][11505] Num frames 5800... |
|
[2025-03-08 09:07:04,051][11505] Avg episode rewards: #0: 21.885, true rewards: #0: 9.718 |
|
[2025-03-08 09:07:04,055][11505] Avg episode reward: 21.885, avg true_objective: 9.718 |
|
[2025-03-08 09:07:04,286][11505] Num frames 5900... |
|
[2025-03-08 09:07:04,600][11505] Num frames 6000... |
|
[2025-03-08 09:07:05,086][11505] Num frames 6100... |
|
[2025-03-08 09:07:05,595][11505] Num frames 6200... |
|
[2025-03-08 09:07:06,133][11505] Num frames 6300... |
|
[2025-03-08 09:07:06,560][11505] Num frames 6400... |
|
[2025-03-08 09:07:07,033][11505] Num frames 6500... |
|
[2025-03-08 09:07:07,384][11505] Num frames 6600... |
|
[2025-03-08 09:07:07,527][11505] Num frames 6700... |
|
[2025-03-08 09:07:07,663][11505] Num frames 6800... |
|
[2025-03-08 09:07:07,797][11505] Num frames 6900... |
|
[2025-03-08 09:07:07,929][11505] Num frames 7000... |
|
[2025-03-08 09:07:08,066][11505] Num frames 7100... |
|
[2025-03-08 09:07:08,200][11505] Num frames 7200... |
|
[2025-03-08 09:07:08,265][11505] Avg episode rewards: #0: 23.439, true rewards: #0: 10.296 |
|
[2025-03-08 09:07:08,266][11505] Avg episode reward: 23.439, avg true_objective: 10.296 |
|
[2025-03-08 09:07:08,389][11505] Num frames 7300... |
|
[2025-03-08 09:07:08,530][11505] Num frames 7400... |
|
[2025-03-08 09:07:08,666][11505] Num frames 7500... |
|
[2025-03-08 09:07:08,803][11505] Num frames 7600... |
|
[2025-03-08 09:07:08,938][11505] Num frames 7700... |
|
[2025-03-08 09:07:09,075][11505] Num frames 7800... |
|
[2025-03-08 09:07:09,211][11505] Num frames 7900... |
|
[2025-03-08 09:07:09,344][11505] Num frames 8000... |
|
[2025-03-08 09:07:09,478][11505] Num frames 8100... |
|
[2025-03-08 09:07:09,623][11505] Num frames 8200... |
|
[2025-03-08 09:07:09,757][11505] Num frames 8300... |
|
[2025-03-08 09:07:09,890][11505] Num frames 8400... |
|
[2025-03-08 09:07:10,026][11505] Num frames 8500... |
|
[2025-03-08 09:07:10,210][11505] Avg episode rewards: #0: 25.118, true rewards: #0: 10.742 |
|
[2025-03-08 09:07:10,211][11505] Avg episode reward: 25.118, avg true_objective: 10.742 |
|
[2025-03-08 09:07:10,222][11505] Num frames 8600... |
|
[2025-03-08 09:07:10,353][11505] Num frames 8700... |
|
[2025-03-08 09:07:10,490][11505] Num frames 8800... |
|
[2025-03-08 09:07:10,636][11505] Num frames 8900... |
|
[2025-03-08 09:07:10,766][11505] Num frames 9000... |
|
[2025-03-08 09:07:10,895][11505] Num frames 9100... |
|
[2025-03-08 09:07:11,026][11505] Num frames 9200... |
|
[2025-03-08 09:07:11,213][11505] Avg episode rewards: #0: 23.776, true rewards: #0: 10.331 |
|
[2025-03-08 09:07:11,214][11505] Avg episode reward: 23.776, avg true_objective: 10.331 |
|
[2025-03-08 09:07:11,219][11505] Num frames 9300... |
|
[2025-03-08 09:07:11,350][11505] Num frames 9400... |
|
[2025-03-08 09:07:11,483][11505] Num frames 9500... |
|
[2025-03-08 09:07:11,628][11505] Num frames 9600... |
|
[2025-03-08 09:07:11,761][11505] Num frames 9700... |
|
[2025-03-08 09:07:11,892][11505] Num frames 9800... |
|
[2025-03-08 09:07:12,025][11505] Num frames 9900... |
|
[2025-03-08 09:07:12,162][11505] Num frames 10000... |
|
[2025-03-08 09:07:12,292][11505] Num frames 10100... |
|
[2025-03-08 09:07:12,425][11505] Num frames 10200... |
|
[2025-03-08 09:07:12,557][11505] Num frames 10300... |
|
[2025-03-08 09:07:12,645][11505] Avg episode rewards: #0: 23.522, true rewards: #0: 10.322 |
|
[2025-03-08 09:07:12,646][11505] Avg episode reward: 23.522, avg true_objective: 10.322 |
|
[2025-03-08 09:08:12,926][11505] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-03-08 09:08:36,441][11505] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-03-08 09:08:36,442][11505] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-03-08 09:08:36,443][11505] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-03-08 09:08:36,444][11505] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-03-08 09:08:36,445][11505] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-08 09:08:36,446][11505] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-03-08 09:08:36,447][11505] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-03-08 09:08:36,448][11505] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-03-08 09:08:36,448][11505] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-03-08 09:08:36,449][11505] Adding new argument 'hf_repository'='Dove667/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-03-08 09:08:36,450][11505] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-03-08 09:08:36,451][11505] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-03-08 09:08:36,452][11505] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-03-08 09:08:36,452][11505] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-03-08 09:08:36,453][11505] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-03-08 09:08:36,481][11505] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-08 09:08:36,482][11505] RunningMeanStd input shape: (1,) |
|
[2025-03-08 09:08:36,494][11505] ConvEncoder: input_channels=3 |
|
[2025-03-08 09:08:36,529][11505] Conv encoder output size: 512 |
|
[2025-03-08 09:08:36,529][11505] Policy head output size: 512 |
|
[2025-03-08 09:08:36,548][11505] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2025-03-08 09:08:36,987][11505] Num frames 100... |
|
[2025-03-08 09:08:37,115][11505] Num frames 200... |
|
[2025-03-08 09:08:37,264][11505] Num frames 300... |
|
[2025-03-08 09:08:37,401][11505] Num frames 400... |
|
[2025-03-08 09:08:37,531][11505] Num frames 500... |
|
[2025-03-08 09:08:37,661][11505] Num frames 600... |
|
[2025-03-08 09:08:37,792][11505] Num frames 700... |
|
[2025-03-08 09:08:37,924][11505] Num frames 800... |
|
[2025-03-08 09:08:38,060][11505] Num frames 900... |
|
[2025-03-08 09:08:38,194][11505] Avg episode rewards: #0: 19.600, true rewards: #0: 9.600 |
|
[2025-03-08 09:08:38,195][11505] Avg episode reward: 19.600, avg true_objective: 9.600 |
|
[2025-03-08 09:08:38,251][11505] Num frames 1000... |
|
[2025-03-08 09:08:38,381][11505] Num frames 1100... |
|
[2025-03-08 09:08:38,512][11505] Num frames 1200... |
|
[2025-03-08 09:08:38,645][11505] Num frames 1300... |
|
[2025-03-08 09:08:38,773][11505] Num frames 1400... |
|
[2025-03-08 09:08:38,907][11505] Num frames 1500... |
|
[2025-03-08 09:08:39,039][11505] Num frames 1600... |
|
[2025-03-08 09:08:39,174][11505] Num frames 1700... |
|
[2025-03-08 09:08:39,312][11505] Num frames 1800... |
|
[2025-03-08 09:08:39,440][11505] Num frames 1900... |
|
[2025-03-08 09:08:39,570][11505] Num frames 2000... |
|
[2025-03-08 09:08:39,698][11505] Num frames 2100... |
|
[2025-03-08 09:08:39,833][11505] Num frames 2200... |
|
[2025-03-08 09:08:39,964][11505] Avg episode rewards: #0: 23.700, true rewards: #0: 11.200 |
|
[2025-03-08 09:08:39,965][11505] Avg episode reward: 23.700, avg true_objective: 11.200 |
|
[2025-03-08 09:08:40,145][11505] Num frames 2300... |
|
[2025-03-08 09:08:40,354][11505] Num frames 2400... |
|
[2025-03-08 09:08:40,547][11505] Num frames 2500... |
|
[2025-03-08 09:08:40,685][11505] Num frames 2600... |
|
[2025-03-08 09:08:40,822][11505] Num frames 2700... |
|
[2025-03-08 09:08:40,960][11505] Num frames 2800... |
|
[2025-03-08 09:08:41,101][11505] Num frames 2900... |
|
[2025-03-08 09:08:41,240][11505] Num frames 3000... |
|
[2025-03-08 09:08:41,315][11505] Avg episode rewards: #0: 20.367, true rewards: #0: 10.033 |
|
[2025-03-08 09:08:41,316][11505] Avg episode reward: 20.367, avg true_objective: 10.033 |
|
[2025-03-08 09:08:41,442][11505] Num frames 3100... |
|
[2025-03-08 09:08:41,576][11505] Num frames 3200... |
|
[2025-03-08 09:08:41,717][11505] Num frames 3300... |
|
[2025-03-08 09:08:41,898][11505] Num frames 3400... |
|
[2025-03-08 09:08:42,080][11505] Num frames 3500... |
|
[2025-03-08 09:08:42,254][11505] Num frames 3600... |
|
[2025-03-08 09:08:42,440][11505] Num frames 3700... |
|
[2025-03-08 09:08:42,660][11505] Avg episode rewards: #0: 19.233, true rewards: #0: 9.482 |
|
[2025-03-08 09:08:42,661][11505] Avg episode reward: 19.233, avg true_objective: 9.482 |
|
[2025-03-08 09:08:42,677][11505] Num frames 3800... |
|
[2025-03-08 09:08:42,844][11505] Num frames 3900... |
|
[2025-03-08 09:08:43,039][11505] Num frames 4000... |
|
[2025-03-08 09:08:43,251][11505] Num frames 4100... |
|
[2025-03-08 09:08:43,476][11505] Num frames 4200... |
|
[2025-03-08 09:08:43,687][11505] Num frames 4300... |
|
[2025-03-08 09:08:43,886][11505] Num frames 4400... |
|
[2025-03-08 09:08:44,116][11505] Num frames 4500... |
|
[2025-03-08 09:08:44,234][11505] Avg episode rewards: #0: 17.858, true rewards: #0: 9.058 |
|
[2025-03-08 09:08:44,235][11505] Avg episode reward: 17.858, avg true_objective: 9.058 |
|
[2025-03-08 09:08:44,389][11505] Num frames 4600... |
|
[2025-03-08 09:08:44,851][11505] Num frames 4700... |
|
[2025-03-08 09:08:45,174][11505] Num frames 4800... |
|
[2025-03-08 09:08:45,355][11505] Num frames 4900... |
|
[2025-03-08 09:08:45,668][11505] Num frames 5000... |
|
[2025-03-08 09:08:46,046][11505] Num frames 5100... |
|
[2025-03-08 09:08:46,184][11505] Num frames 5200... |
|
[2025-03-08 09:08:46,320][11505] Num frames 5300... |
|
[2025-03-08 09:08:46,455][11505] Num frames 5400... |
|
[2025-03-08 09:08:46,747][11505] Num frames 5500... |
|
[2025-03-08 09:08:46,920][11505] Avg episode rewards: #0: 18.805, true rewards: #0: 9.305 |
|
[2025-03-08 09:08:46,921][11505] Avg episode reward: 18.805, avg true_objective: 9.305 |
|
[2025-03-08 09:08:46,946][11505] Num frames 5600... |
|
[2025-03-08 09:08:47,079][11505] Num frames 5700... |
|
[2025-03-08 09:08:47,212][11505] Num frames 5800... |
|
[2025-03-08 09:08:47,342][11505] Num frames 5900... |
|
[2025-03-08 09:08:47,477][11505] Num frames 6000... |
|
[2025-03-08 09:08:47,614][11505] Num frames 6100... |
|
[2025-03-08 09:08:47,745][11505] Num frames 6200... |
|
[2025-03-08 09:08:47,877][11505] Num frames 6300... |
|
[2025-03-08 09:08:48,009][11505] Num frames 6400... |
|
[2025-03-08 09:08:48,143][11505] Num frames 6500... |
|
[2025-03-08 09:08:48,273][11505] Num frames 6600... |
|
[2025-03-08 09:08:48,405][11505] Num frames 6700... |
|
[2025-03-08 09:08:48,542][11505] Num frames 6800... |
|
[2025-03-08 09:08:48,686][11505] Num frames 6900... |
|
[2025-03-08 09:08:48,816][11505] Num frames 7000... |
|
[2025-03-08 09:08:48,887][11505] Avg episode rewards: #0: 21.159, true rewards: #0: 10.016 |
|
[2025-03-08 09:08:48,888][11505] Avg episode reward: 21.159, avg true_objective: 10.016 |
|
[2025-03-08 09:08:49,007][11505] Num frames 7100... |
|
[2025-03-08 09:08:49,145][11505] Num frames 7200... |
|
[2025-03-08 09:08:49,275][11505] Num frames 7300... |
|
[2025-03-08 09:08:49,405][11505] Num frames 7400... |
|
[2025-03-08 09:08:49,536][11505] Num frames 7500... |
|
[2025-03-08 09:08:49,673][11505] Num frames 7600... |
|
[2025-03-08 09:08:49,806][11505] Num frames 7700... |
|
[2025-03-08 09:08:49,936][11505] Num frames 7800... |
|
[2025-03-08 09:08:50,069][11505] Num frames 7900... |
|
[2025-03-08 09:08:50,204][11505] Num frames 8000... |
|
[2025-03-08 09:08:50,333][11505] Num frames 8100... |
|
[2025-03-08 09:08:50,463][11505] Num frames 8200... |
|
[2025-03-08 09:08:50,594][11505] Num frames 8300... |
|
[2025-03-08 09:08:50,735][11505] Num frames 8400... |
|
[2025-03-08 09:08:50,794][11505] Avg episode rewards: #0: 22.753, true rewards: #0: 10.502 |
|
[2025-03-08 09:08:50,795][11505] Avg episode reward: 22.753, avg true_objective: 10.502 |
|
[2025-03-08 09:08:50,921][11505] Num frames 8500... |
|
[2025-03-08 09:08:51,052][11505] Num frames 8600... |
|
[2025-03-08 09:08:51,185][11505] Num frames 8700... |
|
[2025-03-08 09:08:51,314][11505] Num frames 8800... |
|
[2025-03-08 09:08:51,446][11505] Num frames 8900... |
|
[2025-03-08 09:08:51,563][11505] Avg episode rewards: #0: 21.051, true rewards: #0: 9.940 |
|
[2025-03-08 09:08:51,564][11505] Avg episode reward: 21.051, avg true_objective: 9.940 |
|
[2025-03-08 09:08:51,642][11505] Num frames 9000... |
|
[2025-03-08 09:08:51,778][11505] Num frames 9100... |
|
[2025-03-08 09:08:51,911][11505] Num frames 9200... |
|
[2025-03-08 09:08:52,046][11505] Num frames 9300... |
|
[2025-03-08 09:08:52,182][11505] Num frames 9400... |
|
[2025-03-08 09:08:52,312][11505] Num frames 9500... |
|
[2025-03-08 09:08:52,446][11505] Num frames 9600... |
|
[2025-03-08 09:08:52,581][11505] Num frames 9700... |
|
[2025-03-08 09:08:52,723][11505] Num frames 9800... |
|
[2025-03-08 09:08:52,855][11505] Num frames 9900... |
|
[2025-03-08 09:08:53,004][11505] Num frames 10000... |
|
[2025-03-08 09:08:53,169][11505] Avg episode rewards: #0: 21.266, true rewards: #0: 10.066 |
|
[2025-03-08 09:08:53,170][11505] Avg episode reward: 21.266, avg true_objective: 10.066 |
|
[2025-03-08 09:09:51,313][11505] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
|