|
[2025-02-16 14:37:02,074][02958] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-02-16 14:37:02,076][02958] Rollout worker 0 uses device cpu |
|
[2025-02-16 14:37:02,078][02958] Rollout worker 1 uses device cpu |
|
[2025-02-16 14:37:02,079][02958] Rollout worker 2 uses device cpu |
|
[2025-02-16 14:37:02,080][02958] Rollout worker 3 uses device cpu |
|
[2025-02-16 14:37:02,081][02958] Rollout worker 4 uses device cpu |
|
[2025-02-16 14:37:02,082][02958] Rollout worker 5 uses device cpu |
|
[2025-02-16 14:37:02,084][02958] Rollout worker 6 uses device cpu |
|
[2025-02-16 14:37:02,085][02958] Rollout worker 7 uses device cpu |
|
[2025-02-16 14:37:02,240][02958] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 14:37:02,242][02958] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-02-16 14:37:02,278][02958] Starting all processes... |
|
[2025-02-16 14:37:02,280][02958] Starting process learner_proc0 |
|
[2025-02-16 14:37:02,346][02958] Starting all processes... |
|
[2025-02-16 14:37:02,354][02958] Starting process inference_proc0-0 |
|
[2025-02-16 14:37:02,355][02958] Starting process rollout_proc0 |
|
[2025-02-16 14:37:02,358][02958] Starting process rollout_proc1 |
|
[2025-02-16 14:37:02,358][02958] Starting process rollout_proc2 |
|
[2025-02-16 14:37:02,358][02958] Starting process rollout_proc3 |
|
[2025-02-16 14:37:02,358][02958] Starting process rollout_proc4 |
|
[2025-02-16 14:37:02,358][02958] Starting process rollout_proc5 |
|
[2025-02-16 14:37:02,358][02958] Starting process rollout_proc6 |
|
[2025-02-16 14:37:02,358][02958] Starting process rollout_proc7 |
|
[2025-02-16 14:37:18,164][05225] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 14:37:18,164][05225] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-02-16 14:37:18,285][05225] Num visible devices: 1 |
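Note: the learner restricts itself to GPU 0 by setting CUDA_VISIBLE_DEVICES before CUDA is initialized, which is why it then reports a single visible device. A minimal sketch of the same idea (illustrative; restrict_to_gpus is a hypothetical helper, not Sample Factory's API):

    import os

    def restrict_to_gpus(gpu_indices):
        # Expose only the listed GPUs; inside this process they are
        # renumbered starting from cuda:0.
        os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in gpu_indices)

    restrict_to_gpus([0])

    import torch  # import after setting the variable so it takes effect
    print(torch.cuda.device_count())  # -> 1, matching "Num visible devices: 1"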
|
[2025-02-16 14:37:18,351][05225] Starting seed is not provided |
|
[2025-02-16 14:37:18,354][05225] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 14:37:18,354][05225] Initializing actor-critic model on device cuda:0 |
|
[2025-02-16 14:37:18,355][05225] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-16 14:37:18,359][05225] RunningMeanStd input shape: (1,) |
|
[2025-02-16 14:37:18,393][05225] ConvEncoder: input_channels=3 |
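Note: the two RunningMeanStd modules above normalize pixel observations of shape (3, 72, 128) and scalar returns of shape (1,) with running statistics. A minimal sketch of the technique using the standard parallel mean/variance merge (illustrative, not the library's in-place TorchScript version):

    import numpy as np

    class RunningMeanStd:
        def __init__(self, shape, eps=1e-4):
            self.mean = np.zeros(shape, dtype=np.float64)
            self.var = np.ones(shape, dtype=np.float64)
            self.count = eps  # avoids division by zero before the first update

        def update(self, batch):
            # batch: (N, *shape); merge batch statistics into the running ones
            b_mean, b_var, b_count = batch.mean(0), batch.var(0), batch.shape[0]
            delta = b_mean - self.mean
            tot = self.count + b_count
            self.mean = self.mean + delta * b_count / tot
            m_a = self.var * self.count
            m_b = b_var * b_count
            self.var = (m_a + m_b + delta**2 * self.count * b_count / tot) / tot
            self.count = tot

        def normalize(self, x):
            return (x - self.mean) / np.sqrt(self.var + 1e-8)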
|
[2025-02-16 14:37:18,405][05245] Worker 6 uses CPU cores [0] |
|
[2025-02-16 14:37:18,587][05243] Worker 3 uses CPU cores [1] |
|
[2025-02-16 14:37:18,930][05244] Worker 5 uses CPU cores [1] |
|
[2025-02-16 14:37:19,045][05238] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 14:37:19,049][05238] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-02-16 14:37:19,140][05239] Worker 0 uses CPU cores [0] |
|
[2025-02-16 14:37:19,145][05242] Worker 4 uses CPU cores [0] |
|
[2025-02-16 14:37:19,155][05240] Worker 1 uses CPU cores [1] |
|
[2025-02-16 14:37:19,163][05238] Num visible devices: 1 |
|
[2025-02-16 14:37:19,204][05246] Worker 7 uses CPU cores [1] |
|
[2025-02-16 14:37:19,224][05225] Conv encoder output size: 512 |
|
[2025-02-16 14:37:19,226][05225] Policy head output size: 512 |
|
[2025-02-16 14:37:19,226][05241] Worker 2 uses CPU cores [0] |
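Note: each rollout worker pins itself to specific CPU cores, so the eight workers split the two available cores instead of all contending for them. A sketch of one way to do this with psutil (illustrative; CPU affinity works on Linux, which is what Colab runs):

    import psutil

    def pin_to_cores(core_ids):
        # Restrict the calling process (and its threads) to the given cores.
        psutil.Process().cpu_affinity(core_ids)

    pin_to_cores([0])  # e.g. "Worker 2 uses CPU cores [0]"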
|
[2025-02-16 14:37:19,285][05225] Created Actor Critic model with architecture: |
|
[2025-02-16 14:37:19,285][05225] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
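Note: that printout can be sketched as plain PyTorch: a three-layer conv encoder flattened into a 512-unit embedding, a GRU(512, 512) core, and a shared trunk feeding a scalar critic head plus a 5-way action head. This is an illustrative reconstruction from the printout above, not Sample Factory's source; kernel sizes and strides are assumptions:

    import torch
    import torch.nn as nn

    class SharedWeightsActorCritic(nn.Module):
        def __init__(self, num_actions=5, hidden=512):
            super().__init__()
            # conv head over the 3x72x128 resized Doom frames
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 8, stride=4), nn.ELU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ELU(),
                nn.Conv2d(64, 128, 3, stride=2), nn.ELU(),
                nn.Flatten(),
                nn.LazyLinear(hidden), nn.ELU(),  # "Conv encoder output size: 512"
            )
            self.core = nn.GRU(hidden, hidden)             # GRU(512, 512)
            self.critic_linear = nn.Linear(hidden, 1)      # value head
            self.action_logits = nn.Linear(hidden, num_actions)  # 5 Doom actions

        def forward(self, obs, rnn_state=None):
            x = self.encoder(obs)                          # (B, 512)
            x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
            x = x.squeeze(0)
            return self.action_logits(x), self.critic_linear(x), rnn_state

The learner then wraps these parameters in the Adam optimizer reported on the next line; hyperparameters are not shown in the log, so something like torch.optim.Adam(model.parameters(), lr=1e-4) is an assumption.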
|
[2025-02-16 14:37:19,611][05225] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-02-16 14:37:22,240][02958] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-02-16 14:37:22,250][02958] Heartbeat connected on RolloutWorker_w0 |
|
[2025-02-16 14:37:22,253][02958] Heartbeat connected on RolloutWorker_w1 |
|
[2025-02-16 14:37:22,256][02958] Heartbeat connected on RolloutWorker_w2 |
|
[2025-02-16 14:37:22,261][02958] Heartbeat connected on RolloutWorker_w3 |
|
[2025-02-16 14:37:22,267][02958] Heartbeat connected on RolloutWorker_w4 |
|
[2025-02-16 14:37:22,271][02958] Heartbeat connected on RolloutWorker_w5 |
|
[2025-02-16 14:37:22,274][02958] Heartbeat connected on RolloutWorker_w6 |
|
[2025-02-16 14:37:22,278][02958] Heartbeat connected on RolloutWorker_w7 |
|
[2025-02-16 14:37:22,356][02958] Heartbeat connected on Batcher_0 |
|
[2025-02-16 14:37:23,709][05225] No checkpoints found |
|
[2025-02-16 14:37:23,709][05225] Did not load from checkpoint, starting from scratch! |
|
[2025-02-16 14:37:23,710][05225] Initialized policy 0 weights for model version 0 |
|
[2025-02-16 14:37:23,713][05225] LearnerWorker_p0 finished initialization! |
|
[2025-02-16 14:37:23,713][02958] Heartbeat connected on LearnerWorker_p0 |
|
[2025-02-16 14:37:23,716][05225] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 14:37:23,957][05238] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-16 14:37:23,958][05238] RunningMeanStd input shape: (1,) |
|
[2025-02-16 14:37:23,969][05238] ConvEncoder: input_channels=3 |
|
[2025-02-16 14:37:24,069][05238] Conv encoder output size: 512 |
|
[2025-02-16 14:37:24,070][05238] Policy head output size: 512 |
|
[2025-02-16 14:37:24,108][02958] Inference worker 0-0 is ready! |
|
[2025-02-16 14:37:24,110][02958] All inference workers are ready! Signal rollout workers to start! |
|
[2025-02-16 14:37:24,381][05244] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 14:37:24,384][05246] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 14:37:24,395][05239] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 14:37:24,408][05241] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 14:37:24,457][05240] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 14:37:24,465][05243] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 14:37:24,600][05242] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 14:37:24,612][05245] Doom resolution: 160x120, resize resolution: (128, 72) |
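Note: every rollout worker renders Doom at 160x120 and downsamples each frame to 128x72 before it reaches the encoder (hence the (3, 72, 128) input shape above). A minimal sketch with OpenCV (illustrative; the real wrapper lives in the environment code):

    import cv2
    import numpy as np

    def resize_frame(frame, width=128, height=72):
        # frame: HxWxC uint8 from the 160x120 Doom screen buffer
        return cv2.resize(frame, (width, height), interpolation=cv2.INTER_AREA)

    frame = np.zeros((120, 160, 3), dtype=np.uint8)  # dummy 160x120 RGB frame
    print(resize_frame(frame).shape)                 # (72, 128, 3)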
|
[2025-02-16 14:37:25,795][05241] Decorrelating experience for 0 frames... |
|
[2025-02-16 14:37:25,795][05245] Decorrelating experience for 0 frames... |
|
[2025-02-16 14:37:25,941][05240] Decorrelating experience for 0 frames... |
|
[2025-02-16 14:37:25,944][05244] Decorrelating experience for 0 frames... |
|
[2025-02-16 14:37:25,948][05246] Decorrelating experience for 0 frames... |
|
[2025-02-16 14:37:25,951][05243] Decorrelating experience for 0 frames... |
|
[2025-02-16 14:37:27,074][05240] Decorrelating experience for 32 frames... |
|
[2025-02-16 14:37:27,076][05244] Decorrelating experience for 32 frames... |
|
[2025-02-16 14:37:27,078][05243] Decorrelating experience for 32 frames... |
|
[2025-02-16 14:37:27,086][05241] Decorrelating experience for 32 frames... |
|
[2025-02-16 14:37:27,094][05245] Decorrelating experience for 32 frames... |
|
[2025-02-16 14:37:27,141][05239] Decorrelating experience for 0 frames... |
|
[2025-02-16 14:37:27,221][02958] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-02-16 14:37:28,087][05246] Decorrelating experience for 32 frames... |
|
[2025-02-16 14:37:28,320][05240] Decorrelating experience for 64 frames... |
|
[2025-02-16 14:37:28,504][05242] Decorrelating experience for 0 frames... |
|
[2025-02-16 14:37:28,515][05239] Decorrelating experience for 32 frames... |
|
[2025-02-16 14:37:28,866][05245] Decorrelating experience for 64 frames... |
|
[2025-02-16 14:37:29,107][05246] Decorrelating experience for 64 frames... |
|
[2025-02-16 14:37:29,543][05241] Decorrelating experience for 64 frames... |
|
[2025-02-16 14:37:29,638][05244] Decorrelating experience for 64 frames... |
|
[2025-02-16 14:37:30,295][05245] Decorrelating experience for 96 frames... |
|
[2025-02-16 14:37:30,538][05242] Decorrelating experience for 32 frames... |
|
[2025-02-16 14:37:30,821][05246] Decorrelating experience for 96 frames... |
|
[2025-02-16 14:37:31,600][05243] Decorrelating experience for 64 frames... |
|
[2025-02-16 14:37:31,707][05244] Decorrelating experience for 96 frames... |
|
[2025-02-16 14:37:32,221][02958] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-02-16 14:37:32,679][05240] Decorrelating experience for 96 frames... |
|
[2025-02-16 14:37:33,138][05242] Decorrelating experience for 64 frames... |
|
[2025-02-16 14:37:34,033][05243] Decorrelating experience for 96 frames... |
|
[2025-02-16 14:37:35,256][05239] Decorrelating experience for 64 frames... |
|
[2025-02-16 14:37:35,893][05242] Decorrelating experience for 96 frames... |
|
[2025-02-16 14:37:36,279][05225] Signal inference workers to stop experience collection... |
|
[2025-02-16 14:37:36,346][05238] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-02-16 14:37:36,745][05241] Decorrelating experience for 96 frames... |
|
[2025-02-16 14:37:36,789][05239] Decorrelating experience for 96 frames... |
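Note: the "Decorrelating experience for N frames..." phase steps each worker's environments with throwaway actions before real collection begins, logging progress every 32 frames, so workers start from diverse states and early batches are not correlated. A sketch of the idea, assuming a Gymnasium-style env API (illustrative; CartPole stands in for the Doom env):

    import gymnasium as gym

    def decorrelate(env, total_frames=96, chunk=32):
        # Warm up with random actions, logging every `chunk` frames.
        obs, _ = env.reset()
        for frame in range(total_frames + 1):
            if frame % chunk == 0:
                print(f"Decorrelating experience for {frame} frames...")
            if frame == total_frames:
                break
            obs, _, terminated, truncated, _ = env.step(env.action_space.sample())
            if terminated or truncated:
                obs, _ = env.reset()
        return obs

    decorrelate(gym.make("CartPole-v1"))  # stand-in env for illustration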
|
[2025-02-16 14:37:37,221][02958] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 226.0. Samples: 2260. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-02-16 14:37:37,223][02958] Avg episode reward: [(0, '3.030')] |
|
[2025-02-16 14:37:38,019][05225] Signal inference workers to resume experience collection... |
|
[2025-02-16 14:37:38,021][05238] InferenceWorker_p0-w0: resuming experience collection |
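Note: just before its first update the learner pauses experience collection, then resumes it once the initial batch is consumed, which bounds how stale collected experience can get. A minimal sketch of such pause/resume signaling with multiprocessing.Event (illustrative, not the library's actual IPC):

    import multiprocessing as mp
    import time

    def inference_worker(collect):
        while True:
            collect.wait()    # blocks whenever the learner has paused collection
            time.sleep(0.01)  # stand-in for one batch of inference / env steps

    if __name__ == "__main__":
        collect = mp.Event()
        collect.set()  # collection enabled by default
        mp.Process(target=inference_worker, args=(collect,), daemon=True).start()
        collect.clear()  # "Signal inference workers to stop experience collection..."
        time.sleep(0.1)  # stand-in for the learner's first update
        collect.set()    # "Signal inference workers to resume experience collection..."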
|
[2025-02-16 14:37:42,221][02958] Fps is (10 sec: 2457.6, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 24576. Throughput: 0: 472.3. Samples: 7084. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:37:42,225][02958] Avg episode reward: [(0, '3.730')] |
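Note: each status line reports FPS over trailing 10/60/300-second windows (NaN until a window has data), total frames collected, and sampling throughput. A sketch of how such windowed rates can be computed from (timestamp, total_frames) snapshots (illustrative, assuming this is roughly how the averages are formed):

    import time
    from collections import deque

    history = deque(maxlen=1000)  # (timestamp, total_frames) snapshots

    def window_fps(window_sec, now, frames_now):
        inside = [(t, f) for t, f in history if now - t <= window_sec]
        if not inside:
            return float("nan")  # matches the early "Fps is (10 sec: nan, ...)" lines
        t0, f0 = inside[0]
        return (frames_now - f0) / max(now - t0, 1e-9)

    def report(total_frames):
        now = time.time()
        rates = [window_fps(w, now, total_frames) for w in (10, 60, 300)]
        history.append((now, total_frames))
        print("Fps is (10 sec: {:.1f}, 60 sec: {:.1f}, 300 sec: {:.1f}). "
              "Total num frames: {}.".format(*rates, total_frames))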
|
[2025-02-16 14:37:46,983][05238] Updated weights for policy 0, policy_version 10 (0.0099) |
|
[2025-02-16 14:37:47,221][02958] Fps is (10 sec: 4096.0, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 40960. Throughput: 0: 489.5. Samples: 9790. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:37:47,223][02958] Avg episode reward: [(0, '4.051')] |
|
[2025-02-16 14:37:52,221][02958] Fps is (10 sec: 3276.7, 60 sec: 2293.7, 300 sec: 2293.7). Total num frames: 57344. Throughput: 0: 561.4. Samples: 14034. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:37:52,225][02958] Avg episode reward: [(0, '4.501')] |
|
[2025-02-16 14:37:57,144][05238] Updated weights for policy 0, policy_version 20 (0.0018) |
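Note: the recurring "Updated weights for policy 0, policy_version N (...)" lines show the inference worker refreshing its copy of the learner's parameters, here every ten policy versions; the parenthesized number is plausibly the time the refresh took in seconds. A sketch of version-checked weight refresh (illustrative):

    import time
    import torch
    import torch.nn as nn

    class PolicyCache:
        """Inference-side copy of the policy, refreshed whenever the
        learner publishes a newer version."""
        def __init__(self, model):
            self.model = model
            self.version = 0

        def maybe_update(self, shared_state_dict, shared_version):
            if shared_version <= self.version:
                return  # already current
            start = time.time()
            self.model.load_state_dict(shared_state_dict)
            self.version = shared_version
            print(f"Updated weights for policy 0, policy_version "
                  f"{self.version} ({time.time() - start:.4f})")

    cache = PolicyCache(nn.Linear(4, 2))  # toy policy for illustration
    cache.maybe_update(nn.Linear(4, 2).state_dict(), shared_version=10)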
|
[2025-02-16 14:37:57,221][02958] Fps is (10 sec: 4096.0, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 81920. Throughput: 0: 700.1. Samples: 21004. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:37:57,226][02958] Avg episode reward: [(0, '4.340')] |
|
[2025-02-16 14:38:02,224][02958] Fps is (10 sec: 4094.9, 60 sec: 2808.5, 300 sec: 2808.5). Total num frames: 98304. Throughput: 0: 683.0. Samples: 23906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:38:02,226][02958] Avg episode reward: [(0, '4.218')] |
|
[2025-02-16 14:38:07,221][02958] Fps is (10 sec: 3686.4, 60 sec: 2969.6, 300 sec: 2969.6). Total num frames: 118784. Throughput: 0: 729.3. Samples: 29172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:38:07,223][02958] Avg episode reward: [(0, '4.379')] |
|
[2025-02-16 14:38:07,236][05225] Saving new best policy, reward=4.379! |
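Note: a "Saving new best policy, reward=...!" line appears whenever the average episode reward beats the best seen so far, and the learner snapshots the model at that point. A sketch of that bookkeeping (illustrative; the file name is an assumption):

    import torch

    class BestPolicyTracker:
        def __init__(self, save_path="best_policy.pth"):
            self.best_reward = float("-inf")
            self.save_path = save_path

        def maybe_save(self, model, avg_episode_reward):
            if avg_episode_reward <= self.best_reward:
                return
            self.best_reward = avg_episode_reward
            torch.save(model.state_dict(), self.save_path)
            print(f"Saving new best policy, reward={self.best_reward:.3f}!")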
|
[2025-02-16 14:38:08,138][05238] Updated weights for policy 0, policy_version 30 (0.0016) |
|
[2025-02-16 14:38:12,223][02958] Fps is (10 sec: 4096.4, 60 sec: 3094.6, 300 sec: 3094.6). Total num frames: 139264. Throughput: 0: 798.6. Samples: 35938. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:38:12,227][02958] Avg episode reward: [(0, '4.536')] |
|
[2025-02-16 14:38:12,231][05225] Saving new best policy, reward=4.536! |
|
[2025-02-16 14:38:17,221][02958] Fps is (10 sec: 3686.2, 60 sec: 3112.9, 300 sec: 3112.9). Total num frames: 155648. Throughput: 0: 856.9. Samples: 38560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:38:17,224][02958] Avg episode reward: [(0, '4.629')] |
|
[2025-02-16 14:38:17,232][05225] Saving new best policy, reward=4.629! |
|
[2025-02-16 14:38:19,202][05238] Updated weights for policy 0, policy_version 40 (0.0020) |
|
[2025-02-16 14:38:22,221][02958] Fps is (10 sec: 3687.1, 60 sec: 3202.3, 300 sec: 3202.3). Total num frames: 176128. Throughput: 0: 921.5. Samples: 43728. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:38:22,222][02958] Avg episode reward: [(0, '4.522')] |
|
[2025-02-16 14:38:27,221][02958] Fps is (10 sec: 4096.2, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 196608. Throughput: 0: 963.9. Samples: 50460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:38:27,223][02958] Avg episode reward: [(0, '4.487')] |
|
[2025-02-16 14:38:28,391][05238] Updated weights for policy 0, policy_version 50 (0.0015) |
|
[2025-02-16 14:38:32,221][02958] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 212992. Throughput: 0: 959.2. Samples: 52956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:38:32,225][02958] Avg episode reward: [(0, '4.371')] |
|
[2025-02-16 14:38:37,221][02958] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3335.3). Total num frames: 233472. Throughput: 0: 988.8. Samples: 58530. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:38:37,223][02958] Avg episode reward: [(0, '4.265')] |
|
[2025-02-16 14:38:39,281][05238] Updated weights for policy 0, policy_version 60 (0.0043) |
|
[2025-02-16 14:38:42,221][02958] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3440.6). Total num frames: 258048. Throughput: 0: 985.1. Samples: 65332. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:38:42,225][02958] Avg episode reward: [(0, '4.579')] |
|
[2025-02-16 14:38:47,228][02958] Fps is (10 sec: 3683.8, 60 sec: 3822.5, 300 sec: 3378.9). Total num frames: 270336. Throughput: 0: 972.8. Samples: 67684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:38:47,232][02958] Avg episode reward: [(0, '4.533')] |
|
[2025-02-16 14:38:50,540][05238] Updated weights for policy 0, policy_version 70 (0.0016) |
|
[2025-02-16 14:38:52,221][02958] Fps is (10 sec: 3276.9, 60 sec: 3891.2, 300 sec: 3421.4). Total num frames: 290816. Throughput: 0: 973.0. Samples: 72958. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:38:52,223][02958] Avg episode reward: [(0, '4.392')] |
|
[2025-02-16 14:38:57,221][02958] Fps is (10 sec: 4508.7, 60 sec: 3891.2, 300 sec: 3504.4). Total num frames: 315392. Throughput: 0: 983.2. Samples: 80182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:38:57,227][02958] Avg episode reward: [(0, '4.289')] |
|
[2025-02-16 14:38:57,234][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000077_315392.pth... |
|
[2025-02-16 14:39:00,169][05238] Updated weights for policy 0, policy_version 80 (0.0017) |
|
[2025-02-16 14:39:02,221][02958] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3492.4). Total num frames: 331776. Throughput: 0: 973.0. Samples: 82344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:39:02,223][02958] Avg episode reward: [(0, '4.376')] |
|
[2025-02-16 14:39:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3563.5). Total num frames: 356352. Throughput: 0: 994.7. Samples: 88490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:39:07,222][02958] Avg episode reward: [(0, '4.329')] |
|
[2025-02-16 14:39:09,840][05238] Updated weights for policy 0, policy_version 90 (0.0029) |
|
[2025-02-16 14:39:12,223][02958] Fps is (10 sec: 4504.7, 60 sec: 3959.5, 300 sec: 3588.8). Total num frames: 376832. Throughput: 0: 999.7. Samples: 95450. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:39:12,228][02958] Avg episode reward: [(0, '4.312')] |
|
[2025-02-16 14:39:17,221][02958] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3574.7). Total num frames: 393216. Throughput: 0: 990.2. Samples: 97514. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:39:17,225][02958] Avg episode reward: [(0, '4.508')] |
|
[2025-02-16 14:39:20,505][05238] Updated weights for policy 0, policy_version 100 (0.0012) |
|
[2025-02-16 14:39:22,221][02958] Fps is (10 sec: 4096.8, 60 sec: 4027.7, 300 sec: 3633.0). Total num frames: 417792. Throughput: 0: 1004.9. Samples: 103750. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:39:22,225][02958] Avg episode reward: [(0, '4.592')] |
|
[2025-02-16 14:39:27,221][02958] Fps is (10 sec: 4505.7, 60 sec: 4027.7, 300 sec: 3652.3). Total num frames: 438272. Throughput: 0: 1002.9. Samples: 110460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:39:27,223][02958] Avg episode reward: [(0, '4.747')] |
|
[2025-02-16 14:39:27,229][05225] Saving new best policy, reward=4.747! |
|
[2025-02-16 14:39:31,204][05238] Updated weights for policy 0, policy_version 110 (0.0027) |
|
[2025-02-16 14:39:32,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3637.3). Total num frames: 454656. Throughput: 0: 992.4. Samples: 112336. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 14:39:32,227][02958] Avg episode reward: [(0, '4.720')] |
|
[2025-02-16 14:39:37,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3654.9). Total num frames: 475136. Throughput: 0: 1024.4. Samples: 119056. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:39:37,230][02958] Avg episode reward: [(0, '4.658')] |
|
[2025-02-16 14:39:39,832][05238] Updated weights for policy 0, policy_version 120 (0.0015) |
|
[2025-02-16 14:39:42,223][02958] Fps is (10 sec: 4504.7, 60 sec: 4027.6, 300 sec: 3701.5). Total num frames: 499712. Throughput: 0: 1007.6. Samples: 125526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:39:42,230][02958] Avg episode reward: [(0, '4.561')] |
|
[2025-02-16 14:39:47,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4096.5, 300 sec: 3686.4). Total num frames: 516096. Throughput: 0: 1006.3. Samples: 127628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:39:47,223][02958] Avg episode reward: [(0, '4.729')] |
|
[2025-02-16 14:39:50,551][05238] Updated weights for policy 0, policy_version 130 (0.0023) |
|
[2025-02-16 14:39:52,221][02958] Fps is (10 sec: 3687.0, 60 sec: 4096.0, 300 sec: 3700.5). Total num frames: 536576. Throughput: 0: 1019.3. Samples: 134360. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:39:52,229][02958] Avg episode reward: [(0, '4.803')] |
|
[2025-02-16 14:39:52,282][05225] Saving new best policy, reward=4.803! |
|
[2025-02-16 14:39:57,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3713.7). Total num frames: 557056. Throughput: 0: 1003.5. Samples: 140606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:39:57,227][02958] Avg episode reward: [(0, '4.747')] |
|
[2025-02-16 14:40:01,267][05238] Updated weights for policy 0, policy_version 140 (0.0019) |
|
[2025-02-16 14:40:02,221][02958] Fps is (10 sec: 4096.1, 60 sec: 4096.0, 300 sec: 3726.0). Total num frames: 577536. Throughput: 0: 1003.2. Samples: 142658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:40:02,227][02958] Avg episode reward: [(0, '4.739')] |
|
[2025-02-16 14:40:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3737.6). Total num frames: 598016. Throughput: 0: 1018.1. Samples: 149566. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:40:07,226][02958] Avg episode reward: [(0, '4.630')] |
|
[2025-02-16 14:40:09,986][05238] Updated weights for policy 0, policy_version 150 (0.0014) |
|
[2025-02-16 14:40:12,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4027.9, 300 sec: 3748.5). Total num frames: 618496. Throughput: 0: 1004.7. Samples: 155670. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:40:12,227][02958] Avg episode reward: [(0, '4.814')] |
|
[2025-02-16 14:40:12,234][05225] Saving new best policy, reward=4.814! |
|
[2025-02-16 14:40:17,223][02958] Fps is (10 sec: 4094.9, 60 sec: 4095.8, 300 sec: 3758.6). Total num frames: 638976. Throughput: 0: 1011.4. Samples: 157852. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:40:17,232][02958] Avg episode reward: [(0, '4.801')] |
|
[2025-02-16 14:40:20,742][05238] Updated weights for policy 0, policy_version 160 (0.0012) |
|
[2025-02-16 14:40:22,221][02958] Fps is (10 sec: 4095.8, 60 sec: 4027.7, 300 sec: 3768.3). Total num frames: 659456. Throughput: 0: 1017.9. Samples: 164864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:40:22,224][02958] Avg episode reward: [(0, '4.742')] |
|
[2025-02-16 14:40:27,221][02958] Fps is (10 sec: 4097.0, 60 sec: 4027.7, 300 sec: 3777.4). Total num frames: 679936. Throughput: 0: 1010.5. Samples: 170998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:40:27,227][02958] Avg episode reward: [(0, '4.904')] |
|
[2025-02-16 14:40:27,235][05225] Saving new best policy, reward=4.904! |
|
[2025-02-16 14:40:31,219][05238] Updated weights for policy 0, policy_version 170 (0.0022) |
|
[2025-02-16 14:40:32,221][02958] Fps is (10 sec: 4096.1, 60 sec: 4096.0, 300 sec: 3786.0). Total num frames: 700416. Throughput: 0: 1018.5. Samples: 173462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:40:32,223][02958] Avg episode reward: [(0, '5.062')] |
|
[2025-02-16 14:40:32,228][05225] Saving new best policy, reward=5.062! |
|
[2025-02-16 14:40:37,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4096.0, 300 sec: 3794.2). Total num frames: 720896. Throughput: 0: 1023.9. Samples: 180436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:40:37,228][02958] Avg episode reward: [(0, '4.842')] |
|
[2025-02-16 14:40:40,445][05238] Updated weights for policy 0, policy_version 180 (0.0021) |
|
[2025-02-16 14:40:42,222][02958] Fps is (10 sec: 4095.7, 60 sec: 4027.8, 300 sec: 3801.9). Total num frames: 741376. Throughput: 0: 1010.2. Samples: 186066. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:40:42,227][02958] Avg episode reward: [(0, '4.941')] |
|
[2025-02-16 14:40:47,221][02958] Fps is (10 sec: 4096.1, 60 sec: 4096.0, 300 sec: 3809.3). Total num frames: 761856. Throughput: 0: 1023.5. Samples: 188716. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 14:40:47,228][02958] Avg episode reward: [(0, '5.103')] |
|
[2025-02-16 14:40:47,234][05225] Saving new best policy, reward=5.103! |
|
[2025-02-16 14:40:50,843][05238] Updated weights for policy 0, policy_version 190 (0.0017) |
|
[2025-02-16 14:40:52,221][02958] Fps is (10 sec: 4096.3, 60 sec: 4096.0, 300 sec: 3816.3). Total num frames: 782336. Throughput: 0: 1018.7. Samples: 195408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:40:52,227][02958] Avg episode reward: [(0, '5.297')] |
|
[2025-02-16 14:40:52,231][05225] Saving new best policy, reward=5.297! |
|
[2025-02-16 14:40:57,222][02958] Fps is (10 sec: 3685.8, 60 sec: 4027.6, 300 sec: 3803.4). Total num frames: 798720. Throughput: 0: 1006.4. Samples: 200960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:40:57,228][02958] Avg episode reward: [(0, '5.603')] |
|
[2025-02-16 14:40:57,235][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000195_798720.pth... |
|
[2025-02-16 14:40:57,407][05225] Saving new best policy, reward=5.603! |
|
[2025-02-16 14:41:01,597][05238] Updated weights for policy 0, policy_version 200 (0.0022) |
|
[2025-02-16 14:41:02,221][02958] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 3810.2). Total num frames: 819200. Throughput: 0: 1016.6. Samples: 203598. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:41:02,231][02958] Avg episode reward: [(0, '5.697')] |
|
[2025-02-16 14:41:02,246][05225] Saving new best policy, reward=5.697! |
|
[2025-02-16 14:41:07,221][02958] Fps is (10 sec: 4506.3, 60 sec: 4096.0, 300 sec: 3835.3). Total num frames: 843776. Throughput: 0: 1014.0. Samples: 210494. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:41:07,226][02958] Avg episode reward: [(0, '5.665')] |
|
[2025-02-16 14:41:11,694][05238] Updated weights for policy 0, policy_version 210 (0.0016) |
|
[2025-02-16 14:41:12,221][02958] Fps is (10 sec: 4095.8, 60 sec: 4027.7, 300 sec: 3822.9). Total num frames: 860160. Throughput: 0: 997.9. Samples: 215904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:41:12,225][02958] Avg episode reward: [(0, '5.580')] |
|
[2025-02-16 14:41:17,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4027.9, 300 sec: 3828.9). Total num frames: 880640. Throughput: 0: 1010.9. Samples: 218952. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:41:17,223][02958] Avg episode reward: [(0, '5.531')] |
|
[2025-02-16 14:41:21,298][05238] Updated weights for policy 0, policy_version 220 (0.0021) |
|
[2025-02-16 14:41:22,221][02958] Fps is (10 sec: 4505.8, 60 sec: 4096.0, 300 sec: 3852.0). Total num frames: 905216. Throughput: 0: 1009.9. Samples: 225882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:41:22,224][02958] Avg episode reward: [(0, '5.221')] |
|
[2025-02-16 14:41:27,221][02958] Fps is (10 sec: 3686.3, 60 sec: 3959.4, 300 sec: 3822.9). Total num frames: 917504. Throughput: 0: 997.7. Samples: 230964. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:41:27,230][02958] Avg episode reward: [(0, '5.304')] |
|
[2025-02-16 14:41:31,959][05238] Updated weights for policy 0, policy_version 230 (0.0019) |
|
[2025-02-16 14:41:32,221][02958] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3845.2). Total num frames: 942080. Throughput: 0: 1010.7. Samples: 234198. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:41:32,225][02958] Avg episode reward: [(0, '5.541')] |
|
[2025-02-16 14:41:37,221][02958] Fps is (10 sec: 4915.4, 60 sec: 4096.0, 300 sec: 3866.6). Total num frames: 966656. Throughput: 0: 1019.4. Samples: 241282. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:41:37,223][02958] Avg episode reward: [(0, '5.722')] |
|
[2025-02-16 14:41:37,234][05225] Saving new best policy, reward=5.722! |
|
[2025-02-16 14:41:42,221][02958] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3839.0). Total num frames: 978944. Throughput: 0: 1006.3. Samples: 246242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:41:42,224][02958] Avg episode reward: [(0, '5.548')] |
|
[2025-02-16 14:41:42,235][05238] Updated weights for policy 0, policy_version 240 (0.0014) |
|
[2025-02-16 14:41:47,221][02958] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3859.7). Total num frames: 1003520. Throughput: 0: 1026.8. Samples: 249802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:41:47,225][02958] Avg episode reward: [(0, '5.621')] |
|
[2025-02-16 14:41:50,988][05238] Updated weights for policy 0, policy_version 250 (0.0031) |
|
[2025-02-16 14:41:52,221][02958] Fps is (10 sec: 4915.1, 60 sec: 4096.0, 300 sec: 3879.6). Total num frames: 1028096. Throughput: 0: 1027.2. Samples: 256718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:41:52,225][02958] Avg episode reward: [(0, '5.662')] |
|
[2025-02-16 14:41:57,221][02958] Fps is (10 sec: 4096.1, 60 sec: 4096.1, 300 sec: 3868.4). Total num frames: 1044480. Throughput: 0: 1016.5. Samples: 261644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:41:57,226][02958] Avg episode reward: [(0, '5.749')] |
|
[2025-02-16 14:41:57,235][05225] Saving new best policy, reward=5.749! |
|
[2025-02-16 14:42:01,813][05238] Updated weights for policy 0, policy_version 260 (0.0014) |
|
[2025-02-16 14:42:02,221][02958] Fps is (10 sec: 3686.5, 60 sec: 4096.0, 300 sec: 3872.6). Total num frames: 1064960. Throughput: 0: 1023.6. Samples: 265014. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:42:02,224][02958] Avg episode reward: [(0, '6.366')] |
|
[2025-02-16 14:42:02,230][05225] Saving new best policy, reward=6.366! |
|
[2025-02-16 14:42:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3876.6). Total num frames: 1085440. Throughput: 0: 1025.2. Samples: 272018. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:42:07,228][02958] Avg episode reward: [(0, '6.511')] |
|
[2025-02-16 14:42:07,280][05225] Saving new best policy, reward=6.511! |
|
[2025-02-16 14:42:12,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4027.8, 300 sec: 3866.1). Total num frames: 1101824. Throughput: 0: 1024.1. Samples: 277050. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:42:12,225][02958] Avg episode reward: [(0, '6.625')] |
|
[2025-02-16 14:42:12,227][05225] Saving new best policy, reward=6.625! |
|
[2025-02-16 14:42:12,472][05238] Updated weights for policy 0, policy_version 270 (0.0021) |
|
[2025-02-16 14:42:17,223][02958] Fps is (10 sec: 4095.2, 60 sec: 4095.9, 300 sec: 3884.1). Total num frames: 1126400. Throughput: 0: 1030.9. Samples: 280592. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:42:17,230][02958] Avg episode reward: [(0, '7.384')] |
|
[2025-02-16 14:42:17,236][05225] Saving new best policy, reward=7.384! |
|
[2025-02-16 14:42:20,983][05238] Updated weights for policy 0, policy_version 280 (0.0015) |
|
[2025-02-16 14:42:22,226][02958] Fps is (10 sec: 4912.6, 60 sec: 4095.6, 300 sec: 3901.5). Total num frames: 1150976. Throughput: 0: 1030.9. Samples: 287680. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:42:22,228][02958] Avg episode reward: [(0, '7.431')] |
|
[2025-02-16 14:42:22,230][05225] Saving new best policy, reward=7.431! |
|
[2025-02-16 14:42:27,221][02958] Fps is (10 sec: 4096.8, 60 sec: 4164.3, 300 sec: 3957.2). Total num frames: 1167360. Throughput: 0: 1033.0. Samples: 292726. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:42:27,228][02958] Avg episode reward: [(0, '7.441')] |
|
[2025-02-16 14:42:27,234][05225] Saving new best policy, reward=7.441! |
|
[2025-02-16 14:42:31,353][05238] Updated weights for policy 0, policy_version 290 (0.0018) |
|
[2025-02-16 14:42:32,221][02958] Fps is (10 sec: 4098.2, 60 sec: 4164.3, 300 sec: 4040.5). Total num frames: 1191936. Throughput: 0: 1033.2. Samples: 296294. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:42:32,226][02958] Avg episode reward: [(0, '7.350')] |
|
[2025-02-16 14:42:37,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 1212416. Throughput: 0: 1038.8. Samples: 303462. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 14:42:37,225][02958] Avg episode reward: [(0, '7.427')] |
|
[2025-02-16 14:42:41,499][05238] Updated weights for policy 0, policy_version 300 (0.0017) |
|
[2025-02-16 14:42:42,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4164.3, 300 sec: 4026.6). Total num frames: 1228800. Throughput: 0: 1045.9. Samples: 308708. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:42:42,225][02958] Avg episode reward: [(0, '7.264')] |
|
[2025-02-16 14:42:47,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4054.3). Total num frames: 1253376. Throughput: 0: 1050.9. Samples: 312304. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:42:47,223][02958] Avg episode reward: [(0, '7.608')] |
|
[2025-02-16 14:42:47,284][05225] Saving new best policy, reward=7.608! |
|
[2025-02-16 14:42:49,942][05238] Updated weights for policy 0, policy_version 310 (0.0027) |
|
[2025-02-16 14:42:52,223][02958] Fps is (10 sec: 4504.6, 60 sec: 4095.9, 300 sec: 4040.4). Total num frames: 1273856. Throughput: 0: 1046.4. Samples: 319108. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:42:52,229][02958] Avg episode reward: [(0, '8.100')] |
|
[2025-02-16 14:42:52,233][05225] Saving new best policy, reward=8.100! |
|
[2025-02-16 14:42:57,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4054.4). Total num frames: 1294336. Throughput: 0: 1052.0. Samples: 324390. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 14:42:57,224][02958] Avg episode reward: [(0, '8.395')] |
|
[2025-02-16 14:42:57,233][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000316_1294336.pth... |
|
[2025-02-16 14:42:57,363][05225] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000077_315392.pth |
|
[2025-02-16 14:42:57,375][05225] Saving new best policy, reward=8.395! |
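Note: the checkpoint names encode the policy version and total environment frames (checkpoint_{version:09d}_{frames}.pth), and each save is paired with removal of the oldest file, i.e. rolling retention; from this log the limit looks like the two most recent checkpoints, with best-policy snapshots kept separately. A sketch of that rotation (illustrative):

    import glob
    import os
    import torch

    def save_rotating_checkpoint(model, policy_version, env_frames,
                                 ckpt_dir="checkpoint_p0", keep=2):
        os.makedirs(ckpt_dir, exist_ok=True)
        path = os.path.join(ckpt_dir,
                            f"checkpoint_{policy_version:09d}_{env_frames}.pth")
        print(f"Saving {path}...")
        torch.save(model.state_dict(), path)
        # zero-padded versions make lexicographic order == chronological order
        ckpts = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
        for old in ckpts[:-keep]:
            print(f"Removing {old}")
            os.remove(old)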
|
[2025-02-16 14:43:00,485][05238] Updated weights for policy 0, policy_version 320 (0.0024) |
|
[2025-02-16 14:43:02,221][02958] Fps is (10 sec: 4506.6, 60 sec: 4232.5, 300 sec: 4068.2). Total num frames: 1318912. Throughput: 0: 1050.5. Samples: 327862. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:43:02,222][02958] Avg episode reward: [(0, '8.276')] |
|
[2025-02-16 14:43:07,223][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4068.3). Total num frames: 1339392. Throughput: 0: 1042.3. Samples: 334578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:43:07,228][02958] Avg episode reward: [(0, '8.668')] |
|
[2025-02-16 14:43:07,238][05225] Saving new best policy, reward=8.668! |
|
[2025-02-16 14:43:10,906][05238] Updated weights for policy 0, policy_version 330 (0.0032) |
|
[2025-02-16 14:43:12,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4068.2). Total num frames: 1355776. Throughput: 0: 1052.1. Samples: 340072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:43:12,223][02958] Avg episode reward: [(0, '9.907')] |
|
[2025-02-16 14:43:12,232][05225] Saving new best policy, reward=9.907! |
|
[2025-02-16 14:43:17,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.7, 300 sec: 4082.1). Total num frames: 1380352. Throughput: 0: 1052.3. Samples: 343648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:43:17,226][02958] Avg episode reward: [(0, '10.920')] |
|
[2025-02-16 14:43:17,232][05225] Saving new best policy, reward=10.920! |
|
[2025-02-16 14:43:19,536][05238] Updated weights for policy 0, policy_version 340 (0.0019) |
|
[2025-02-16 14:43:22,223][02958] Fps is (10 sec: 4095.1, 60 sec: 4096.2, 300 sec: 4068.2). Total num frames: 1396736. Throughput: 0: 1034.2. Samples: 350002. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 14:43:22,227][02958] Avg episode reward: [(0, '10.952')] |
|
[2025-02-16 14:43:22,230][05225] Saving new best policy, reward=10.952! |
|
[2025-02-16 14:43:27,221][02958] Fps is (10 sec: 3686.3, 60 sec: 4164.3, 300 sec: 4082.1). Total num frames: 1417216. Throughput: 0: 1044.1. Samples: 355694. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:43:27,224][02958] Avg episode reward: [(0, '11.569')] |
|
[2025-02-16 14:43:27,233][05225] Saving new best policy, reward=11.569! |
|
[2025-02-16 14:43:29,961][05238] Updated weights for policy 0, policy_version 350 (0.0015) |
|
[2025-02-16 14:43:32,221][02958] Fps is (10 sec: 4506.6, 60 sec: 4164.3, 300 sec: 4096.0). Total num frames: 1441792. Throughput: 0: 1042.5. Samples: 359216. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:43:32,224][02958] Avg episode reward: [(0, '11.686')] |
|
[2025-02-16 14:43:32,232][05225] Saving new best policy, reward=11.686! |
|
[2025-02-16 14:43:37,221][02958] Fps is (10 sec: 4505.7, 60 sec: 4164.3, 300 sec: 4082.1). Total num frames: 1462272. Throughput: 0: 1032.7. Samples: 365576. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 14:43:37,225][02958] Avg episode reward: [(0, '11.302')] |
|
[2025-02-16 14:43:40,181][05238] Updated weights for policy 0, policy_version 360 (0.0022) |
|
[2025-02-16 14:43:42,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4110.0). Total num frames: 1482752. Throughput: 0: 1046.6. Samples: 371488. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:43:42,229][02958] Avg episode reward: [(0, '10.794')] |
|
[2025-02-16 14:43:47,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4123.8). Total num frames: 1507328. Throughput: 0: 1050.2. Samples: 375122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:43:47,223][02958] Avg episode reward: [(0, '12.249')] |
|
[2025-02-16 14:43:47,229][05225] Saving new best policy, reward=12.249! |
|
[2025-02-16 14:43:48,741][05238] Updated weights for policy 0, policy_version 370 (0.0023) |
|
[2025-02-16 14:43:52,223][02958] Fps is (10 sec: 4095.2, 60 sec: 4164.3, 300 sec: 4096.0). Total num frames: 1523712. Throughput: 0: 1035.2. Samples: 381164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:43:52,231][02958] Avg episode reward: [(0, '12.174')] |
|
[2025-02-16 14:43:57,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4164.3, 300 sec: 4109.9). Total num frames: 1544192. Throughput: 0: 1047.3. Samples: 387202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:43:57,227][02958] Avg episode reward: [(0, '11.931')] |
|
[2025-02-16 14:43:59,202][05238] Updated weights for policy 0, policy_version 380 (0.0020) |
|
[2025-02-16 14:44:02,221][02958] Fps is (10 sec: 4506.4, 60 sec: 4164.3, 300 sec: 4109.9). Total num frames: 1568768. Throughput: 0: 1047.6. Samples: 390790. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:44:02,226][02958] Avg episode reward: [(0, '10.892')] |
|
[2025-02-16 14:44:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4096.0). Total num frames: 1585152. Throughput: 0: 1042.2. Samples: 396900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:44:07,225][02958] Avg episode reward: [(0, '10.834')] |
|
[2025-02-16 14:44:09,384][05238] Updated weights for policy 0, policy_version 390 (0.0017) |
|
[2025-02-16 14:44:12,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4123.8). Total num frames: 1609728. Throughput: 0: 1057.7. Samples: 403290. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:44:12,227][02958] Avg episode reward: [(0, '11.081')] |
|
[2025-02-16 14:44:17,223][02958] Fps is (10 sec: 4914.3, 60 sec: 4232.4, 300 sec: 4123.7). Total num frames: 1634304. Throughput: 0: 1060.4. Samples: 406934. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:44:17,228][02958] Avg episode reward: [(0, '12.201')] |
|
[2025-02-16 14:44:17,814][05238] Updated weights for policy 0, policy_version 400 (0.0022) |
|
[2025-02-16 14:44:22,228][02958] Fps is (10 sec: 4093.2, 60 sec: 4232.2, 300 sec: 4109.8). Total num frames: 1650688. Throughput: 0: 1047.6. Samples: 412724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:44:22,230][02958] Avg episode reward: [(0, '12.922')] |
|
[2025-02-16 14:44:22,240][05225] Saving new best policy, reward=12.922! |
|
[2025-02-16 14:44:27,221][02958] Fps is (10 sec: 3687.1, 60 sec: 4232.5, 300 sec: 4123.8). Total num frames: 1671168. Throughput: 0: 1059.8. Samples: 419178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:44:27,223][02958] Avg episode reward: [(0, '13.843')] |
|
[2025-02-16 14:44:27,287][05225] Saving new best policy, reward=13.843! |
|
[2025-02-16 14:44:28,336][05238] Updated weights for policy 0, policy_version 410 (0.0012) |
|
[2025-02-16 14:44:32,221][02958] Fps is (10 sec: 4508.7, 60 sec: 4232.5, 300 sec: 4137.7). Total num frames: 1695744. Throughput: 0: 1055.9. Samples: 422638. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:44:32,223][02958] Avg episode reward: [(0, '12.969')] |
|
[2025-02-16 14:44:37,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4109.9). Total num frames: 1712128. Throughput: 0: 1047.4. Samples: 428296. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:44:37,223][02958] Avg episode reward: [(0, '13.424')] |
|
[2025-02-16 14:44:38,570][05238] Updated weights for policy 0, policy_version 420 (0.0014) |
|
[2025-02-16 14:44:42,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4137.7). Total num frames: 1736704. Throughput: 0: 1065.1. Samples: 435132. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:44:42,223][02958] Avg episode reward: [(0, '13.921')] |
|
[2025-02-16 14:44:42,228][05225] Saving new best policy, reward=13.921! |
|
[2025-02-16 14:44:46,957][05238] Updated weights for policy 0, policy_version 430 (0.0034) |
|
[2025-02-16 14:44:47,222][02958] Fps is (10 sec: 4914.5, 60 sec: 4232.4, 300 sec: 4151.5). Total num frames: 1761280. Throughput: 0: 1065.3. Samples: 438732. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:44:47,225][02958] Avg episode reward: [(0, '13.637')] |
|
[2025-02-16 14:44:52,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.7, 300 sec: 4137.7). Total num frames: 1777664. Throughput: 0: 1049.1. Samples: 444110. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:44:52,223][02958] Avg episode reward: [(0, '13.823')] |
|
[2025-02-16 14:44:57,083][05238] Updated weights for policy 0, policy_version 440 (0.0032) |
|
[2025-02-16 14:44:57,221][02958] Fps is (10 sec: 4096.6, 60 sec: 4300.8, 300 sec: 4151.5). Total num frames: 1802240. Throughput: 0: 1062.2. Samples: 451088. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:44:57,223][02958] Avg episode reward: [(0, '14.337')] |
|
[2025-02-16 14:44:57,230][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000440_1802240.pth... |
|
[2025-02-16 14:44:57,353][05225] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000195_798720.pth |
|
[2025-02-16 14:44:57,374][05225] Saving new best policy, reward=14.337! |
|
[2025-02-16 14:45:02,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4151.5). Total num frames: 1822720. Throughput: 0: 1058.7. Samples: 454572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:45:02,232][02958] Avg episode reward: [(0, '14.220')] |
|
[2025-02-16 14:45:07,221][02958] Fps is (10 sec: 3686.3, 60 sec: 4232.5, 300 sec: 4137.6). Total num frames: 1839104. Throughput: 0: 1042.5. Samples: 459632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:45:07,223][02958] Avg episode reward: [(0, '14.849')] |
|
[2025-02-16 14:45:07,231][05225] Saving new best policy, reward=14.849! |
|
[2025-02-16 14:45:07,615][05238] Updated weights for policy 0, policy_version 450 (0.0032) |
|
[2025-02-16 14:45:12,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4151.6). Total num frames: 1863680. Throughput: 0: 1058.8. Samples: 466824. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:45:12,222][02958] Avg episode reward: [(0, '14.819')] |
|
[2025-02-16 14:45:16,151][05238] Updated weights for policy 0, policy_version 460 (0.0025) |
|
[2025-02-16 14:45:17,224][02958] Fps is (10 sec: 4504.5, 60 sec: 4164.2, 300 sec: 4151.5). Total num frames: 1884160. Throughput: 0: 1064.2. Samples: 470528. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:45:17,228][02958] Avg episode reward: [(0, '14.952')] |
|
[2025-02-16 14:45:17,234][05225] Saving new best policy, reward=14.952! |
|
[2025-02-16 14:45:22,229][02958] Fps is (10 sec: 3683.5, 60 sec: 4164.2, 300 sec: 4137.5). Total num frames: 1900544. Throughput: 0: 1050.6. Samples: 475580. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:45:22,231][02958] Avg episode reward: [(0, '15.702')] |
|
[2025-02-16 14:45:22,279][05225] Saving new best policy, reward=15.702! |
|
[2025-02-16 14:45:26,431][05238] Updated weights for policy 0, policy_version 470 (0.0028) |
|
[2025-02-16 14:45:27,221][02958] Fps is (10 sec: 4506.9, 60 sec: 4300.8, 300 sec: 4165.4). Total num frames: 1929216. Throughput: 0: 1056.5. Samples: 482674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:45:27,223][02958] Avg episode reward: [(0, '15.149')] |
|
[2025-02-16 14:45:32,221][02958] Fps is (10 sec: 4919.1, 60 sec: 4232.5, 300 sec: 4165.4). Total num frames: 1949696. Throughput: 0: 1056.7. Samples: 486284. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:45:32,223][02958] Avg episode reward: [(0, '16.597')] |
|
[2025-02-16 14:45:32,225][05225] Saving new best policy, reward=16.597! |
|
[2025-02-16 14:45:36,667][05238] Updated weights for policy 0, policy_version 480 (0.0024) |
|
[2025-02-16 14:45:37,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4151.6). Total num frames: 1966080. Throughput: 0: 1049.4. Samples: 491332. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:45:37,227][02958] Avg episode reward: [(0, '15.668')] |
|
[2025-02-16 14:45:42,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4165.4). Total num frames: 1990656. Throughput: 0: 1056.4. Samples: 498628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:45:42,226][02958] Avg episode reward: [(0, '15.389')] |
|
[2025-02-16 14:45:45,418][05238] Updated weights for policy 0, policy_version 490 (0.0019) |
|
[2025-02-16 14:45:47,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4164.4, 300 sec: 4165.4). Total num frames: 2011136. Throughput: 0: 1060.8. Samples: 502306. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:45:47,222][02958] Avg episode reward: [(0, '14.751')] |
|
[2025-02-16 14:45:52,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 4179.3). Total num frames: 2031616. Throughput: 0: 1064.3. Samples: 507526. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:45:52,228][02958] Avg episode reward: [(0, '14.635')] |
|
[2025-02-16 14:45:55,462][05238] Updated weights for policy 0, policy_version 500 (0.0029) |
|
[2025-02-16 14:45:57,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4193.2). Total num frames: 2056192. Throughput: 0: 1062.8. Samples: 514650. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:45:57,223][02958] Avg episode reward: [(0, '15.667')] |
|
[2025-02-16 14:46:02,227][02958] Fps is (10 sec: 4503.0, 60 sec: 4232.1, 300 sec: 4179.2). Total num frames: 2076672. Throughput: 0: 1056.3. Samples: 518066. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:46:02,230][02958] Avg episode reward: [(0, '16.792')] |
|
[2025-02-16 14:46:02,234][05225] Saving new best policy, reward=16.792! |
|
[2025-02-16 14:46:05,787][05238] Updated weights for policy 0, policy_version 510 (0.0035) |
|
[2025-02-16 14:46:07,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.6, 300 sec: 4179.3). Total num frames: 2093056. Throughput: 0: 1063.5. Samples: 523430. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:46:07,223][02958] Avg episode reward: [(0, '17.479')] |
|
[2025-02-16 14:46:07,232][05225] Saving new best policy, reward=17.479! |
|
[2025-02-16 14:46:12,221][02958] Fps is (10 sec: 4098.4, 60 sec: 4232.5, 300 sec: 4193.2). Total num frames: 2117632. Throughput: 0: 1067.5. Samples: 530710. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 14:46:12,223][02958] Avg episode reward: [(0, '17.353')] |
|
[2025-02-16 14:46:14,264][05238] Updated weights for policy 0, policy_version 520 (0.0022) |
|
[2025-02-16 14:46:17,224][02958] Fps is (10 sec: 4503.9, 60 sec: 4232.5, 300 sec: 4179.3). Total num frames: 2138112. Throughput: 0: 1059.1. Samples: 533946. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:46:17,227][02958] Avg episode reward: [(0, '17.707')] |
|
[2025-02-16 14:46:17,238][05225] Saving new best policy, reward=17.707! |
|
[2025-02-16 14:46:22,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4301.4, 300 sec: 4207.1). Total num frames: 2158592. Throughput: 0: 1066.0. Samples: 539300. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:46:22,222][02958] Avg episode reward: [(0, '18.668')] |
|
[2025-02-16 14:46:22,228][05225] Saving new best policy, reward=18.668! |
|
[2025-02-16 14:46:24,675][05238] Updated weights for policy 0, policy_version 530 (0.0014) |
|
[2025-02-16 14:46:27,221][02958] Fps is (10 sec: 4507.3, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 2183168. Throughput: 0: 1063.9. Samples: 546504. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 14:46:27,222][02958] Avg episode reward: [(0, '18.500')] |
|
[2025-02-16 14:46:32,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4179.3). Total num frames: 2199552. Throughput: 0: 1049.2. Samples: 549522. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:46:32,225][02958] Avg episode reward: [(0, '18.193')] |
|
[2025-02-16 14:46:34,967][05238] Updated weights for policy 0, policy_version 540 (0.0013) |
|
[2025-02-16 14:46:37,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 2220032. Throughput: 0: 1059.6. Samples: 555210. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 14:46:37,223][02958] Avg episode reward: [(0, '18.308')] |
|
[2025-02-16 14:46:42,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 2244608. Throughput: 0: 1066.1. Samples: 562624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:46:42,225][02958] Avg episode reward: [(0, '18.256')] |
|
[2025-02-16 14:46:43,189][05238] Updated weights for policy 0, policy_version 550 (0.0023) |
|
[2025-02-16 14:46:47,223][02958] Fps is (10 sec: 4504.6, 60 sec: 4232.4, 300 sec: 4193.2). Total num frames: 2265088. Throughput: 0: 1052.1. Samples: 565406. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:46:47,229][02958] Avg episode reward: [(0, '19.725')] |
|
[2025-02-16 14:46:47,239][05225] Saving new best policy, reward=19.725! |
|
[2025-02-16 14:46:52,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 2285568. Throughput: 0: 1063.8. Samples: 571300. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:46:52,225][02958] Avg episode reward: [(0, '19.575')] |
|
[2025-02-16 14:46:53,636][05238] Updated weights for policy 0, policy_version 560 (0.0025) |
|
[2025-02-16 14:46:57,221][02958] Fps is (10 sec: 4506.6, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 2310144. Throughput: 0: 1061.4. Samples: 578474. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:46:57,223][02958] Avg episode reward: [(0, '20.086')] |
|
[2025-02-16 14:46:57,229][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000564_2310144.pth... |
|
[2025-02-16 14:46:57,352][05225] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000316_1294336.pth |
|
[2025-02-16 14:46:57,370][05225] Saving new best policy, reward=20.086! |
|
[2025-02-16 14:47:02,223][02958] Fps is (10 sec: 4095.4, 60 sec: 4164.6, 300 sec: 4207.1). Total num frames: 2326528. Throughput: 0: 1043.0. Samples: 580878. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:47:02,228][02958] Avg episode reward: [(0, '20.104')] |
|
[2025-02-16 14:47:02,230][05225] Saving new best policy, reward=20.104! |
|
[2025-02-16 14:47:03,977][05238] Updated weights for policy 0, policy_version 570 (0.0024) |
|
[2025-02-16 14:47:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 4234.8). Total num frames: 2351104. Throughput: 0: 1060.3. Samples: 587012. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:47:07,224][02958] Avg episode reward: [(0, '17.953')] |
|
[2025-02-16 14:47:12,174][05238] Updated weights for policy 0, policy_version 580 (0.0018) |
|
[2025-02-16 14:47:12,221][02958] Fps is (10 sec: 4916.1, 60 sec: 4300.8, 300 sec: 4234.9). Total num frames: 2375680. Throughput: 0: 1064.1. Samples: 594388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:47:12,224][02958] Avg episode reward: [(0, '17.157')] |
|
[2025-02-16 14:47:17,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4164.5, 300 sec: 4193.3). Total num frames: 2387968. Throughput: 0: 1048.0. Samples: 596680. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:47:17,224][02958] Avg episode reward: [(0, '16.643')] |
|
[2025-02-16 14:47:22,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 2412544. Throughput: 0: 1066.0. Samples: 603178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:47:22,227][02958] Avg episode reward: [(0, '16.985')] |
|
[2025-02-16 14:47:22,423][05238] Updated weights for policy 0, policy_version 590 (0.0014) |
|
[2025-02-16 14:47:27,221][02958] Fps is (10 sec: 4915.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 2437120. Throughput: 0: 1057.5. Samples: 610210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:47:27,223][02958] Avg episode reward: [(0, '18.337')] |
|
[2025-02-16 14:47:32,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 2453504. Throughput: 0: 1045.3. Samples: 612440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:47:32,224][02958] Avg episode reward: [(0, '18.766')] |
|
[2025-02-16 14:47:32,786][05238] Updated weights for policy 0, policy_version 600 (0.0017) |
|
[2025-02-16 14:47:37,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 4234.8). Total num frames: 2478080. Throughput: 0: 1063.1. Samples: 619138. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:47:37,223][02958] Avg episode reward: [(0, '21.101')] |
|
[2025-02-16 14:47:37,230][05225] Saving new best policy, reward=21.101! |
|
[2025-02-16 14:47:41,067][05238] Updated weights for policy 0, policy_version 610 (0.0017) |
|
[2025-02-16 14:47:42,221][02958] Fps is (10 sec: 4915.2, 60 sec: 4300.8, 300 sec: 4234.8). Total num frames: 2502656. Throughput: 0: 1058.2. Samples: 626092. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 14:47:42,228][02958] Avg episode reward: [(0, '19.701')] |
|
[2025-02-16 14:47:47,221][02958] Fps is (10 sec: 4096.1, 60 sec: 4232.7, 300 sec: 4221.0). Total num frames: 2519040. Throughput: 0: 1052.2. Samples: 628224. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:47:47,225][02958] Avg episode reward: [(0, '19.419')] |
|
[2025-02-16 14:47:51,404][05238] Updated weights for policy 0, policy_version 620 (0.0022) |
|
[2025-02-16 14:47:52,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4300.8, 300 sec: 4234.8). Total num frames: 2543616. Throughput: 0: 1070.7. Samples: 635192. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:47:52,226][02958] Avg episode reward: [(0, '19.212')] |
|
[2025-02-16 14:47:57,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 2564096. Throughput: 0: 1051.3. Samples: 641696. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:47:57,228][02958] Avg episode reward: [(0, '17.727')] |
|
[2025-02-16 14:48:01,759][05238] Updated weights for policy 0, policy_version 630 (0.0014) |
|
[2025-02-16 14:48:02,221][02958] Fps is (10 sec: 3686.5, 60 sec: 4232.7, 300 sec: 4207.1). Total num frames: 2580480. Throughput: 0: 1048.8. Samples: 643874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:48:02,223][02958] Avg episode reward: [(0, '18.376')] |
|
[2025-02-16 14:48:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4234.8). Total num frames: 2605056. Throughput: 0: 1062.4. Samples: 650988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:48:07,223][02958] Avg episode reward: [(0, '18.392')] |
|
[2025-02-16 14:48:10,143][05238] Updated weights for policy 0, policy_version 640 (0.0015) |
|
[2025-02-16 14:48:12,221][02958] Fps is (10 sec: 4505.5, 60 sec: 4164.3, 300 sec: 4221.0). Total num frames: 2625536. Throughput: 0: 1052.9. Samples: 657592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:48:12,228][02958] Avg episode reward: [(0, '17.950')] |
|
[2025-02-16 14:48:17,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 4234.9). Total num frames: 2646016. Throughput: 0: 1054.1. Samples: 659874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:48:17,224][02958] Avg episode reward: [(0, '18.518')] |
|
[2025-02-16 14:48:20,316][05238] Updated weights for policy 0, policy_version 650 (0.0017) |
|
[2025-02-16 14:48:22,221][02958] Fps is (10 sec: 4505.7, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 2670592. Throughput: 0: 1067.7. Samples: 667184. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:48:22,227][02958] Avg episode reward: [(0, '18.745')] |
|
[2025-02-16 14:48:27,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.6, 300 sec: 4234.8). Total num frames: 2691072. Throughput: 0: 1049.0. Samples: 673298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:48:27,227][02958] Avg episode reward: [(0, '18.786')] |
|
[2025-02-16 14:48:30,636][05238] Updated weights for policy 0, policy_version 660 (0.0017) |
|
[2025-02-16 14:48:32,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 2707456. Throughput: 0: 1057.0. Samples: 675790. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:48:32,222][02958] Avg episode reward: [(0, '19.666')] |
|
[2025-02-16 14:48:37,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4234.8). Total num frames: 2732032. Throughput: 0: 1064.3. Samples: 683086. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:48:37,223][02958] Avg episode reward: [(0, '20.015')] |
|
[2025-02-16 14:48:38,953][05238] Updated weights for policy 0, policy_version 670 (0.0022) |
|
[2025-02-16 14:48:42,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4221.0). Total num frames: 2752512. Throughput: 0: 1052.9. Samples: 689078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:48:42,223][02958] Avg episode reward: [(0, '20.828')] |
|
[2025-02-16 14:48:47,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 4234.9). Total num frames: 2772992. Throughput: 0: 1066.5. Samples: 691866. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:48:47,229][02958] Avg episode reward: [(0, '21.020')] |
|
[2025-02-16 14:48:49,288][05238] Updated weights for policy 0, policy_version 680 (0.0016) |
|
[2025-02-16 14:48:52,221][02958] Fps is (10 sec: 4505.5, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 2797568. Throughput: 0: 1070.0. Samples: 699140. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 14:48:52,227][02958] Avg episode reward: [(0, '21.005')] |
|
[2025-02-16 14:48:57,224][02958] Fps is (10 sec: 4094.9, 60 sec: 4164.1, 300 sec: 4220.9). Total num frames: 2813952. Throughput: 0: 1049.5. Samples: 704824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:48:57,229][02958] Avg episode reward: [(0, '20.314')] |
|
[2025-02-16 14:48:57,282][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000688_2818048.pth... |
|
[2025-02-16 14:48:57,437][05225] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000440_1802240.pth |
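The checkpoint filename encodes both training counters: `checkpoint_000000688_2818048.pth` is policy version (train step) 688 at 2,818,048 environment frames. The two are tied together by `batch_size=1024` and `env_frameskip=4` from this run's configuration (dumped in full later in this log):

```python
# One learner step consumes batch_size samples, each spanning env_frameskip
# environment frames, so env_frames = train_step * 1024 * 4 for this run.
assert 688 * 1024 * 4 == 2818048
```

The paired `Removing ...` line is the rotation implied by `keep_checkpoints=2`: once a new checkpoint is written, the oldest regular checkpoint is deleted.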
|
[2025-02-16 14:48:59,603][05238] Updated weights for policy 0, policy_version 690 (0.0029) |
|
[2025-02-16 14:49:02,221][02958] Fps is (10 sec: 4096.1, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 2838528. Throughput: 0: 1063.8. Samples: 707746. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 14:49:02,225][02958] Avg episode reward: [(0, '18.476')] |
|
[2025-02-16 14:49:07,221][02958] Fps is (10 sec: 4916.6, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 2863104. Throughput: 0: 1064.1. Samples: 715070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:49:07,230][02958] Avg episode reward: [(0, '17.787')] |
|
[2025-02-16 14:49:07,982][05238] Updated weights for policy 0, policy_version 700 (0.0022) |
|
[2025-02-16 14:49:12,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 2879488. Throughput: 0: 1049.2. Samples: 720512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 14:49:12,226][02958] Avg episode reward: [(0, '17.949')] |
|
[2025-02-16 14:49:17,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4234.9). Total num frames: 2899968. Throughput: 0: 1064.4. Samples: 723688. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:49:17,229][02958] Avg episode reward: [(0, '17.840')] |
|
[2025-02-16 14:49:18,367][05238] Updated weights for policy 0, policy_version 710 (0.0021) |
|
[2025-02-16 14:49:22,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 2924544. Throughput: 0: 1064.8. Samples: 731000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:49:22,223][02958] Avg episode reward: [(0, '18.704')] |
|
[2025-02-16 14:49:27,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4221.0). Total num frames: 2940928. Throughput: 0: 1049.2. Samples: 736290. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:49:27,228][02958] Avg episode reward: [(0, '20.185')] |
|
[2025-02-16 14:49:28,743][05238] Updated weights for policy 0, policy_version 720 (0.0021) |
|
[2025-02-16 14:49:32,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 2965504. Throughput: 0: 1061.7. Samples: 739642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:49:32,227][02958] Avg episode reward: [(0, '19.973')] |
|
[2025-02-16 14:49:37,148][05238] Updated weights for policy 0, policy_version 730 (0.0021) |
|
[2025-02-16 14:49:37,221][02958] Fps is (10 sec: 4915.2, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 2990080. Throughput: 0: 1061.8. Samples: 746920. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:49:37,224][02958] Avg episode reward: [(0, '21.563')] |
|
[2025-02-16 14:49:37,230][05225] Saving new best policy, reward=21.563! |
|
[2025-02-16 14:49:42,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 3006464. Throughput: 0: 1049.2. Samples: 752036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:49:42,227][02958] Avg episode reward: [(0, '21.233')] |
|
[2025-02-16 14:49:47,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4234.8). Total num frames: 3026944. Throughput: 0: 1063.3. Samples: 755596. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:49:47,225][02958] Avg episode reward: [(0, '21.825')] |
|
[2025-02-16 14:49:47,240][05225] Saving new best policy, reward=21.825! |
|
[2025-02-16 14:49:47,484][05238] Updated weights for policy 0, policy_version 740 (0.0023) |
|
[2025-02-16 14:49:52,222][02958] Fps is (10 sec: 4505.4, 60 sec: 4232.5, 300 sec: 4234.8). Total num frames: 3051520. Throughput: 0: 1062.7. Samples: 762894. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:49:52,227][02958] Avg episode reward: [(0, '21.031')] |
|
[2025-02-16 14:49:57,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.7, 300 sec: 4221.0). Total num frames: 3067904. Throughput: 0: 1053.7. Samples: 767928. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:49:57,225][02958] Avg episode reward: [(0, '20.917')] |
|
[2025-02-16 14:49:57,661][05238] Updated weights for policy 0, policy_version 750 (0.0018) |
|
[2025-02-16 14:50:02,221][02958] Fps is (10 sec: 4096.3, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 3092480. Throughput: 0: 1064.1. Samples: 771572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:50:02,223][02958] Avg episode reward: [(0, '20.779')] |
|
[2025-02-16 14:50:06,249][05238] Updated weights for policy 0, policy_version 760 (0.0025) |
|
[2025-02-16 14:50:07,222][02958] Fps is (10 sec: 4914.9, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 3117056. Throughput: 0: 1063.5. Samples: 778858. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:50:07,226][02958] Avg episode reward: [(0, '19.673')] |
|
[2025-02-16 14:50:12,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4234.9). Total num frames: 3133440. Throughput: 0: 1059.5. Samples: 783968. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 14:50:12,224][02958] Avg episode reward: [(0, '19.677')] |
|
[2025-02-16 14:50:16,398][05238] Updated weights for policy 0, policy_version 770 (0.0012) |
|
[2025-02-16 14:50:17,221][02958] Fps is (10 sec: 3686.7, 60 sec: 4232.5, 300 sec: 4248.8). Total num frames: 3153920. Throughput: 0: 1067.2. Samples: 787664. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:50:17,224][02958] Avg episode reward: [(0, '20.872')] |
|
[2025-02-16 14:50:22,231][02958] Fps is (10 sec: 4501.2, 60 sec: 4231.8, 300 sec: 4234.7). Total num frames: 3178496. Throughput: 0: 1068.1. Samples: 794994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:50:22,233][02958] Avg episode reward: [(0, '20.394')] |
|
[2025-02-16 14:50:26,557][05238] Updated weights for policy 0, policy_version 780 (0.0025) |
|
[2025-02-16 14:50:27,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 3194880. Throughput: 0: 1064.9. Samples: 799956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:50:27,224][02958] Avg episode reward: [(0, '20.562')] |
|
[2025-02-16 14:50:32,221][02958] Fps is (10 sec: 4100.0, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 3219456. Throughput: 0: 1065.6. Samples: 803548. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:50:32,224][02958] Avg episode reward: [(0, '20.140')] |
|
[2025-02-16 14:50:35,140][05238] Updated weights for policy 0, policy_version 790 (0.0015) |
|
[2025-02-16 14:50:37,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4234.8). Total num frames: 3239936. Throughput: 0: 1063.0. Samples: 810726. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:50:37,226][02958] Avg episode reward: [(0, '20.054')] |
|
[2025-02-16 14:50:42,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.6, 300 sec: 4234.8). Total num frames: 3260416. Throughput: 0: 1068.9. Samples: 816030. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:50:42,227][02958] Avg episode reward: [(0, '20.007')] |
|
[2025-02-16 14:50:45,226][05238] Updated weights for policy 0, policy_version 800 (0.0031) |
|
[2025-02-16 14:50:47,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 3284992. Throughput: 0: 1070.4. Samples: 819738. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:50:47,229][02958] Avg episode reward: [(0, '19.741')] |
|
[2025-02-16 14:50:52,226][02958] Fps is (10 sec: 4503.5, 60 sec: 4232.2, 300 sec: 4234.8). Total num frames: 3305472. Throughput: 0: 1062.7. Samples: 826682. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:50:52,228][02958] Avg episode reward: [(0, '20.172')] |
|
[2025-02-16 14:50:55,393][05238] Updated weights for policy 0, policy_version 810 (0.0020) |
|
[2025-02-16 14:50:57,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 4234.9). Total num frames: 3325952. Throughput: 0: 1068.0. Samples: 832028. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:50:57,223][02958] Avg episode reward: [(0, '19.648')] |
|
[2025-02-16 14:50:57,232][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000812_3325952.pth... |
|
[2025-02-16 14:50:57,361][05225] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000564_2310144.pth |
|
[2025-02-16 14:51:02,223][02958] Fps is (10 sec: 4097.2, 60 sec: 4232.4, 300 sec: 4248.7). Total num frames: 3346432. Throughput: 0: 1065.1. Samples: 835594. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:51:02,224][02958] Avg episode reward: [(0, '20.050')] |
|
[2025-02-16 14:51:03,991][05238] Updated weights for policy 0, policy_version 820 (0.0015) |
|
[2025-02-16 14:51:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4234.8). Total num frames: 3366912. Throughput: 0: 1051.5. Samples: 842302. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:51:07,224][02958] Avg episode reward: [(0, '20.249')] |
|
[2025-02-16 14:51:12,221][02958] Fps is (10 sec: 4096.7, 60 sec: 4232.5, 300 sec: 4234.9). Total num frames: 3387392. Throughput: 0: 1067.6. Samples: 847998. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:51:12,226][02958] Avg episode reward: [(0, '19.024')] |
|
[2025-02-16 14:51:14,246][05238] Updated weights for policy 0, policy_version 830 (0.0022) |
|
[2025-02-16 14:51:17,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 3411968. Throughput: 0: 1069.3. Samples: 851668. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:51:17,225][02958] Avg episode reward: [(0, '20.108')] |
|
[2025-02-16 14:51:22,221][02958] Fps is (10 sec: 4505.5, 60 sec: 4233.2, 300 sec: 4234.8). Total num frames: 3432448. Throughput: 0: 1056.6. Samples: 858274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:51:22,226][02958] Avg episode reward: [(0, '19.590')] |
|
[2025-02-16 14:51:24,255][05238] Updated weights for policy 0, policy_version 840 (0.0033) |
|
[2025-02-16 14:51:27,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 3452928. Throughput: 0: 1065.5. Samples: 863978. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:51:27,223][02958] Avg episode reward: [(0, '20.684')] |
|
[2025-02-16 14:51:32,221][02958] Fps is (10 sec: 4505.7, 60 sec: 4300.8, 300 sec: 4262.6). Total num frames: 3477504. Throughput: 0: 1064.8. Samples: 867656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:51:32,224][02958] Avg episode reward: [(0, '22.690')] |
|
[2025-02-16 14:51:32,227][05225] Saving new best policy, reward=22.690! |
|
[2025-02-16 14:51:32,922][05238] Updated weights for policy 0, policy_version 850 (0.0036) |
|
[2025-02-16 14:51:37,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4234.8). Total num frames: 3493888. Throughput: 0: 1048.8. Samples: 873874. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:51:37,223][02958] Avg episode reward: [(0, '22.302')] |
|
[2025-02-16 14:51:42,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4234.9). Total num frames: 3514368. Throughput: 0: 1067.2. Samples: 880050. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:51:42,223][02958] Avg episode reward: [(0, '23.345')] |
|
[2025-02-16 14:51:42,256][05225] Saving new best policy, reward=23.345! |
|
[2025-02-16 14:51:43,169][05238] Updated weights for policy 0, policy_version 860 (0.0018) |
|
[2025-02-16 14:51:47,221][02958] Fps is (10 sec: 4505.7, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 3538944. Throughput: 0: 1067.3. Samples: 883620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:51:47,223][02958] Avg episode reward: [(0, '23.726')] |
|
[2025-02-16 14:51:47,231][05225] Saving new best policy, reward=23.726! |
|
[2025-02-16 14:51:52,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.9, 300 sec: 4234.8). Total num frames: 3559424. Throughput: 0: 1051.8. Samples: 889632. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 14:51:52,223][02958] Avg episode reward: [(0, '23.126')] |
|
[2025-02-16 14:51:53,321][05238] Updated weights for policy 0, policy_version 870 (0.0031) |
|
[2025-02-16 14:51:57,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4248.8). Total num frames: 3579904. Throughput: 0: 1064.0. Samples: 895878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 14:51:57,223][02958] Avg episode reward: [(0, '21.351')] |
|
[2025-02-16 14:52:01,825][05238] Updated weights for policy 0, policy_version 880 (0.0022) |
|
[2025-02-16 14:52:02,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4300.9, 300 sec: 4248.7). Total num frames: 3604480. Throughput: 0: 1063.2. Samples: 899514. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:52:02,225][02958] Avg episode reward: [(0, '20.270')] |
|
[2025-02-16 14:52:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 3620864. Throughput: 0: 1047.3. Samples: 905404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:52:07,227][02958] Avg episode reward: [(0, '20.152')] |
|
[2025-02-16 14:52:12,048][05238] Updated weights for policy 0, policy_version 890 (0.0017) |
|
[2025-02-16 14:52:12,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 4262.6). Total num frames: 3645440. Throughput: 0: 1067.7. Samples: 912024. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:52:12,227][02958] Avg episode reward: [(0, '19.866')] |
|
[2025-02-16 14:52:17,221][02958] Fps is (10 sec: 4915.3, 60 sec: 4300.8, 300 sec: 4262.6). Total num frames: 3670016. Throughput: 0: 1069.7. Samples: 915794. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:52:17,227][02958] Avg episode reward: [(0, '20.952')] |
|
[2025-02-16 14:52:22,112][05238] Updated weights for policy 0, policy_version 900 (0.0012) |
|
[2025-02-16 14:52:22,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 4234.8). Total num frames: 3686400. Throughput: 0: 1054.5. Samples: 921328. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:52:22,227][02958] Avg episode reward: [(0, '22.541')] |
|
[2025-02-16 14:52:27,221][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 3706880. Throughput: 0: 1069.8. Samples: 928190. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:52:27,227][02958] Avg episode reward: [(0, '24.276')] |
|
[2025-02-16 14:52:27,277][05225] Saving new best policy, reward=24.276! |
|
[2025-02-16 14:52:30,731][05238] Updated weights for policy 0, policy_version 910 (0.0027) |
|
[2025-02-16 14:52:32,224][02958] Fps is (10 sec: 4504.2, 60 sec: 4232.3, 300 sec: 4248.7). Total num frames: 3731456. Throughput: 0: 1068.8. Samples: 931718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:52:32,226][02958] Avg episode reward: [(0, '24.032')] |
|
[2025-02-16 14:52:37,221][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 3747840. Throughput: 0: 1053.7. Samples: 937048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:52:37,224][02958] Avg episode reward: [(0, '24.020')] |
|
[2025-02-16 14:52:40,888][05238] Updated weights for policy 0, policy_version 920 (0.0016) |
|
[2025-02-16 14:52:42,221][02958] Fps is (10 sec: 4097.2, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 3772416. Throughput: 0: 1073.5. Samples: 944186. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:52:42,228][02958] Avg episode reward: [(0, '23.877')] |
|
[2025-02-16 14:52:47,221][02958] Fps is (10 sec: 4915.3, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 3796992. Throughput: 0: 1073.6. Samples: 947826. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:52:47,223][02958] Avg episode reward: [(0, '23.470')] |
|
[2025-02-16 14:52:50,984][05238] Updated weights for policy 0, policy_version 930 (0.0023) |
|
[2025-02-16 14:52:52,221][02958] Fps is (10 sec: 4096.1, 60 sec: 4232.5, 300 sec: 4234.8). Total num frames: 3813376. Throughput: 0: 1059.4. Samples: 953076. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:52:52,223][02958] Avg episode reward: [(0, '23.203')] |
|
[2025-02-16 14:52:57,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 4262.6). Total num frames: 3837952. Throughput: 0: 1073.2. Samples: 960320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 14:52:57,224][02958] Avg episode reward: [(0, '23.940')] |
|
[2025-02-16 14:52:57,231][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000937_3837952.pth... |
|
[2025-02-16 14:52:57,366][05225] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000688_2818048.pth |
|
[2025-02-16 14:52:59,552][05238] Updated weights for policy 0, policy_version 940 (0.0019) |
|
[2025-02-16 14:53:02,221][02958] Fps is (10 sec: 4505.3, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 3858432. Throughput: 0: 1069.0. Samples: 963900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 14:53:02,224][02958] Avg episode reward: [(0, '22.257')] |
|
[2025-02-16 14:53:07,221][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 4248.7). Total num frames: 3878912. Throughput: 0: 1061.7. Samples: 969104. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:53:07,223][02958] Avg episode reward: [(0, '20.916')] |
|
[2025-02-16 14:53:09,711][05238] Updated weights for policy 0, policy_version 950 (0.0018) |
|
[2025-02-16 14:53:12,221][02958] Fps is (10 sec: 4505.9, 60 sec: 4300.8, 300 sec: 4262.6). Total num frames: 3903488. Throughput: 0: 1072.5. Samples: 976454. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:53:12,224][02958] Avg episode reward: [(0, '20.575')] |
|
[2025-02-16 14:53:17,221][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4248.7). Total num frames: 3923968. Throughput: 0: 1076.5. Samples: 980156. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 14:53:17,227][02958] Avg episode reward: [(0, '20.843')] |
|
[2025-02-16 14:53:19,709][05238] Updated weights for policy 0, policy_version 960 (0.0021) |
|
[2025-02-16 14:53:22,221][02958] Fps is (10 sec: 3686.2, 60 sec: 4232.5, 300 sec: 4234.8). Total num frames: 3940352. Throughput: 0: 1070.8. Samples: 985236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:53:22,227][02958] Avg episode reward: [(0, '21.470')] |
|
[2025-02-16 14:53:27,223][02958] Fps is (10 sec: 4095.3, 60 sec: 4300.7, 300 sec: 4262.6). Total num frames: 3964928. Throughput: 0: 1074.0. Samples: 992516. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 14:53:27,232][02958] Avg episode reward: [(0, '21.523')] |
|
[2025-02-16 14:53:28,251][05238] Updated weights for policy 0, policy_version 970 (0.0029) |
|
[2025-02-16 14:53:32,221][02958] Fps is (10 sec: 4505.9, 60 sec: 4232.8, 300 sec: 4248.7). Total num frames: 3985408. Throughput: 0: 1072.8. Samples: 996102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 14:53:32,227][02958] Avg episode reward: [(0, '23.046')] |
|
[2025-02-16 14:53:36,729][05225] Stopping Batcher_0... |
|
[2025-02-16 14:53:36,730][05225] Loop batcher_evt_loop terminating... |
|
[2025-02-16 14:53:36,732][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-16 14:53:36,735][02958] Component Batcher_0 stopped! |
|
[2025-02-16 14:53:36,797][05238] Weights refcount: 2 0 |
|
[2025-02-16 14:53:36,801][05238] Stopping InferenceWorker_p0-w0... |
|
[2025-02-16 14:53:36,802][05238] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-02-16 14:53:36,801][02958] Component InferenceWorker_p0-w0 stopped! |
|
[2025-02-16 14:53:36,872][05225] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000812_3325952.pth |
|
[2025-02-16 14:53:36,888][05225] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-16 14:53:37,054][05225] Stopping LearnerWorker_p0... |
|
[2025-02-16 14:53:37,054][05225] Loop learner_proc0_evt_loop terminating... |
|
[2025-02-16 14:53:37,054][02958] Component LearnerWorker_p0 stopped! |
|
[2025-02-16 14:53:37,062][05241] Stopping RolloutWorker_w2... |
|
[2025-02-16 14:53:37,065][05241] Loop rollout_proc2_evt_loop terminating... |
|
[2025-02-16 14:53:37,062][02958] Component RolloutWorker_w2 stopped! |
|
[2025-02-16 14:53:37,088][02958] Component RolloutWorker_w0 stopped! |
|
[2025-02-16 14:53:37,088][05239] Stopping RolloutWorker_w0... |
|
[2025-02-16 14:53:37,093][05239] Loop rollout_proc0_evt_loop terminating... |
|
[2025-02-16 14:53:37,125][02958] Component RolloutWorker_w6 stopped! |
|
[2025-02-16 14:53:37,125][05245] Stopping RolloutWorker_w6... |
|
[2025-02-16 14:53:37,129][05245] Loop rollout_proc6_evt_loop terminating... |
|
[2025-02-16 14:53:37,155][05246] Stopping RolloutWorker_w7... |
|
[2025-02-16 14:53:37,155][02958] Component RolloutWorker_w7 stopped! |
|
[2025-02-16 14:53:37,158][05246] Loop rollout_proc7_evt_loop terminating... |
|
[2025-02-16 14:53:37,168][05240] Stopping RolloutWorker_w1... |
|
[2025-02-16 14:53:37,169][05244] Stopping RolloutWorker_w5... |
|
[2025-02-16 14:53:37,170][05244] Loop rollout_proc5_evt_loop terminating... |
|
[2025-02-16 14:53:37,167][02958] Component RolloutWorker_w1 stopped! |
|
[2025-02-16 14:53:37,168][05240] Loop rollout_proc1_evt_loop terminating... |
|
[2025-02-16 14:53:37,173][02958] Component RolloutWorker_w5 stopped! |
|
[2025-02-16 14:53:37,212][05243] Stopping RolloutWorker_w3... |
|
[2025-02-16 14:53:37,214][05243] Loop rollout_proc3_evt_loop terminating... |
|
[2025-02-16 14:53:37,212][02958] Component RolloutWorker_w3 stopped! |
|
[2025-02-16 14:53:37,229][02958] Component RolloutWorker_w4 stopped! |
|
[2025-02-16 14:53:37,231][02958] Waiting for process learner_proc0 to stop... |
|
[2025-02-16 14:53:37,229][05242] Stopping RolloutWorker_w4... |
|
[2025-02-16 14:53:37,235][05242] Loop rollout_proc4_evt_loop terminating... |
|
[2025-02-16 14:53:38,970][02958] Waiting for process inference_proc0-0 to join... |
|
[2025-02-16 14:53:38,986][02958] Waiting for process rollout_proc0 to join... |
|
[2025-02-16 14:53:41,060][02958] Waiting for process rollout_proc1 to join... |
|
[2025-02-16 14:53:41,064][02958] Waiting for process rollout_proc2 to join... |
|
[2025-02-16 14:53:41,069][02958] Waiting for process rollout_proc3 to join... |
|
[2025-02-16 14:53:41,071][02958] Waiting for process rollout_proc4 to join... |
|
[2025-02-16 14:53:41,080][02958] Waiting for process rollout_proc5 to join... |
|
[2025-02-16 14:53:41,083][02958] Waiting for process rollout_proc6 to join... |
|
[2025-02-16 14:53:41,085][02958] Waiting for process rollout_proc7 to join... |
|
[2025-02-16 14:53:41,086][02958] Batcher 0 profile tree view: |
|
batching: 26.0965, releasing_batches: 0.0263 |
|
[2025-02-16 14:53:41,088][02958] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
  wait_policy_total: 371.3402 |
|
update_model: 8.1085 |
|
  weight_update: 0.0022 |
|
one_step: 0.0086 |
|
  handle_policy_step: 557.2889 |
|
    deserialize: 13.4705, stack: 2.9367, obs_to_device_normalize: 118.8774, forward: 284.4291, send_messages: 27.3757 |
|
    prepare_outputs: 86.1898 |
|
      to_cpu: 53.8686 |
|
[2025-02-16 14:53:41,089][02958] Learner 0 profile tree view: |
|
misc: 0.0048, prepare_batch: 12.5577 |
|
train: 71.6181 |
|
  epoch_init: 0.0051, minibatch_init: 0.0057, losses_postprocess: 0.6377, kl_divergence: 0.6817, after_optimizer: 33.3935 |
|
  calculate_losses: 24.9330 |
|
    losses_init: 0.0048, forward_head: 1.3216, bptt_initial: 16.4950, tail: 1.0610, advantages_returns: 0.2838, losses: 3.5414 |
|
    bptt: 1.9336 |
|
      bptt_forward_core: 1.8308 |
|
  update: 11.3640 |
|
    clip: 0.8522 |
|
[2025-02-16 14:53:41,091][02958] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.2544, enqueue_policy_requests: 84.8000, env_step: 772.0301, overhead: 10.3802, complete_rollouts: 6.8933 |
|
save_policy_outputs: 16.8582 |
|
  split_output_tensors: 6.6159 |
|
[2025-02-16 14:53:41,092][02958] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.2766, enqueue_policy_requests: 86.9453, env_step: 772.9186, overhead: 10.9014, complete_rollouts: 6.7027 |
|
save_policy_outputs: 16.9195 |
|
  split_output_tensors: 6.6486 |
|
[2025-02-16 14:53:41,093][02958] Loop Runner_EvtLoop terminating... |
|
[2025-02-16 14:53:41,095][02958] Runner profile tree view: |
|
main_loop: 998.8171 |
|
[2025-02-16 14:53:41,096][02958] Collected {0: 4005888}, FPS: 4010.6 |
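The closing summary is internally consistent: 4,005,888 environment frames over the 998.8 s main loop gives the reported average throughput.

```python
# Whole-run average FPS = total env frames / main_loop wall-clock seconds.
print(4005888 / 998.8171)  # ~4010.6, matching "FPS: 4010.6"
```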
|
[2025-02-16 14:55:02,972][02958] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-16 14:55:02,974][02958] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-02-16 14:55:02,976][02958] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-16 14:55:02,978][02958] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-16 14:55:02,980][02958] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-16 14:55:02,982][02958] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-16 14:55:02,984][02958] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-16 14:55:02,984][02958] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-16 14:55:02,985][02958] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-02-16 14:55:02,986][02958] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-02-16 14:55:02,987][02958] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-16 14:55:02,988][02958] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-16 14:55:02,989][02958] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-16 14:55:02,990][02958] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-16 14:55:02,992][02958] Using frameskip 1 and render_action_repeat=4 for evaluation |
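The overrides above are what an evaluation ("enjoy") run passes on top of the saved training config. A minimal sketch of such a call, assuming the VizDoom helpers shipped with Sample Factory 2.x under `sf_examples` (the log does not show the driving script, so the exact entry point is an assumption):

```python
from sample_factory.enjoy import enjoy
from sf_examples.vizdoom.train_vizdoom import (
    parse_vizdoom_cfg,
    register_vizdoom_components,
)

register_vizdoom_components()
cfg = parse_vizdoom_cfg(
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",        # the 'num_workers' override logged above
        "--no_render",
        "--save_video",
        "--max_num_episodes=10",
        "--train_dir=/content/train_dir",
        "--experiment=default_experiment",
    ],
    evaluation=True,              # adds the eval-only arguments seen above
)
status = enjoy(cfg)               # loads the latest checkpoint, records replay.mp4
```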
|
[2025-02-16 14:55:03,025][02958] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 14:55:03,029][02958] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-16 14:55:03,031][02958] RunningMeanStd input shape: (1,) |
|
[2025-02-16 14:55:03,046][02958] ConvEncoder: input_channels=3 |
|
[2025-02-16 14:55:03,160][02958] Conv encoder output size: 512 |
|
[2025-02-16 14:55:03,162][02958] Policy head output size: 512 |
|
[2025-02-16 14:55:03,440][02958] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-16 14:55:04,502][02958] Num frames 100... |
|
[2025-02-16 14:55:04,669][02958] Num frames 200... |
|
[2025-02-16 14:55:04,840][02958] Num frames 300... |
|
[2025-02-16 14:55:05,034][02958] Num frames 400... |
|
[2025-02-16 14:55:05,218][02958] Num frames 500... |
|
[2025-02-16 14:55:05,394][02958] Num frames 600... |
|
[2025-02-16 14:55:05,575][02958] Num frames 700... |
|
[2025-02-16 14:55:05,710][02958] Num frames 800... |
|
[2025-02-16 14:55:05,763][02958] Avg episode rewards: #0: 20.000, true rewards: #0: 8.000 |
|
[2025-02-16 14:55:05,764][02958] Avg episode reward: 20.000, avg true_objective: 8.000 |
|
[2025-02-16 14:55:05,894][02958] Num frames 900... |
|
[2025-02-16 14:55:06,028][02958] Num frames 1000... |
|
[2025-02-16 14:55:06,161][02958] Num frames 1100... |
|
[2025-02-16 14:55:06,292][02958] Num frames 1200... |
|
[2025-02-16 14:55:06,368][02958] Avg episode rewards: #0: 13.080, true rewards: #0: 6.080 |
|
[2025-02-16 14:55:06,369][02958] Avg episode reward: 13.080, avg true_objective: 6.080 |
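These `Avg episode rewards` lines are running means over all evaluation episodes completed so far, not per-episode scores, which is why the average can fall between episodes:

```python
# Recovering episode 2's score from the two running means printed above.
ep1, mean_after_2 = 20.000, 13.080
ep2 = 2 * mean_after_2 - ep1
print(ep2)  # 6.16 -- a weak second episode pulls the mean down from 20.0
```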
|
[2025-02-16 14:55:06,483][02958] Num frames 1300... |
|
[2025-02-16 14:55:06,611][02958] Num frames 1400... |
|
[2025-02-16 14:55:06,740][02958] Num frames 1500... |
|
[2025-02-16 14:55:06,866][02958] Num frames 1600... |
|
[2025-02-16 14:55:06,994][02958] Num frames 1700... |
|
[2025-02-16 14:55:07,137][02958] Num frames 1800... |
|
[2025-02-16 14:55:07,265][02958] Num frames 1900... |
|
[2025-02-16 14:55:07,396][02958] Num frames 2000... |
|
[2025-02-16 14:55:07,524][02958] Num frames 2100... |
|
[2025-02-16 14:55:07,652][02958] Num frames 2200... |
|
[2025-02-16 14:55:07,783][02958] Num frames 2300... |
|
[2025-02-16 14:55:07,911][02958] Num frames 2400... |
|
[2025-02-16 14:55:08,038][02958] Num frames 2500... |
|
[2025-02-16 14:55:08,226][02958] Avg episode rewards: #0: 19.640, true rewards: #0: 8.640 |
|
[2025-02-16 14:55:08,227][02958] Avg episode reward: 19.640, avg true_objective: 8.640 |
|
[2025-02-16 14:55:08,242][02958] Num frames 2600... |
|
[2025-02-16 14:55:08,369][02958] Num frames 2700... |
|
[2025-02-16 14:55:08,497][02958] Num frames 2800... |
|
[2025-02-16 14:55:08,627][02958] Num frames 2900... |
|
[2025-02-16 14:55:08,758][02958] Num frames 3000... |
|
[2025-02-16 14:55:08,889][02958] Num frames 3100... |
|
[2025-02-16 14:55:09,017][02958] Num frames 3200... |
|
[2025-02-16 14:55:09,160][02958] Num frames 3300... |
|
[2025-02-16 14:55:09,291][02958] Num frames 3400... |
|
[2025-02-16 14:55:09,422][02958] Num frames 3500... |
|
[2025-02-16 14:55:09,554][02958] Num frames 3600... |
|
[2025-02-16 14:55:09,682][02958] Num frames 3700... |
|
[2025-02-16 14:55:09,816][02958] Num frames 3800... |
|
[2025-02-16 14:55:09,945][02958] Num frames 3900... |
|
[2025-02-16 14:55:10,073][02958] Num frames 4000... |
|
[2025-02-16 14:55:10,217][02958] Num frames 4100... |
|
[2025-02-16 14:55:10,351][02958] Num frames 4200... |
|
[2025-02-16 14:55:10,442][02958] Avg episode rewards: #0: 24.060, true rewards: #0: 10.560 |
|
[2025-02-16 14:55:10,443][02958] Avg episode reward: 24.060, avg true_objective: 10.560 |
|
[2025-02-16 14:55:10,545][02958] Num frames 4300... |
|
[2025-02-16 14:55:10,672][02958] Num frames 4400... |
|
[2025-02-16 14:55:10,801][02958] Num frames 4500... |
|
[2025-02-16 14:55:10,927][02958] Num frames 4600... |
|
[2025-02-16 14:55:11,054][02958] Num frames 4700... |
|
[2025-02-16 14:55:11,200][02958] Num frames 4800... |
|
[2025-02-16 14:55:11,332][02958] Num frames 4900... |
|
[2025-02-16 14:55:11,462][02958] Num frames 5000... |
|
[2025-02-16 14:55:11,590][02958] Num frames 5100... |
|
[2025-02-16 14:55:11,721][02958] Num frames 5200... |
|
[2025-02-16 14:55:11,850][02958] Num frames 5300... |
|
[2025-02-16 14:55:11,982][02958] Num frames 5400... |
|
[2025-02-16 14:55:12,114][02958] Num frames 5500... |
|
[2025-02-16 14:55:12,253][02958] Num frames 5600... |
|
[2025-02-16 14:55:12,382][02958] Num frames 5700... |
|
[2025-02-16 14:55:12,522][02958] Num frames 5800... |
|
[2025-02-16 14:55:12,651][02958] Num frames 5900... |
|
[2025-02-16 14:55:12,780][02958] Num frames 6000... |
|
[2025-02-16 14:55:12,935][02958] Avg episode rewards: #0: 28.560, true rewards: #0: 12.160 |
|
[2025-02-16 14:55:12,937][02958] Avg episode reward: 28.560, avg true_objective: 12.160 |
|
[2025-02-16 14:55:12,966][02958] Num frames 6100... |
|
[2025-02-16 14:55:13,096][02958] Num frames 6200... |
|
[2025-02-16 14:55:13,241][02958] Num frames 6300... |
|
[2025-02-16 14:55:13,378][02958] Num frames 6400... |
|
[2025-02-16 14:55:13,510][02958] Num frames 6500... |
|
[2025-02-16 14:55:13,641][02958] Avg episode rewards: #0: 25.100, true rewards: #0: 10.933 |
|
[2025-02-16 14:55:13,643][02958] Avg episode reward: 25.100, avg true_objective: 10.933 |
|
[2025-02-16 14:55:13,695][02958] Num frames 6600... |
|
[2025-02-16 14:55:13,822][02958] Num frames 6700... |
|
[2025-02-16 14:55:13,949][02958] Num frames 6800... |
|
[2025-02-16 14:55:14,080][02958] Num frames 6900... |
|
[2025-02-16 14:55:14,223][02958] Num frames 7000... |
|
[2025-02-16 14:55:14,374][02958] Avg episode rewards: #0: 22.674, true rewards: #0: 10.103 |
|
[2025-02-16 14:55:14,376][02958] Avg episode reward: 22.674, avg true_objective: 10.103 |
|
[2025-02-16 14:55:14,415][02958] Num frames 7100... |
|
[2025-02-16 14:55:14,541][02958] Num frames 7200... |
|
[2025-02-16 14:55:14,669][02958] Num frames 7300... |
|
[2025-02-16 14:55:14,795][02958] Num frames 7400... |
|
[2025-02-16 14:55:14,925][02958] Num frames 7500... |
|
[2025-02-16 14:55:15,054][02958] Num frames 7600... |
|
[2025-02-16 14:55:15,186][02958] Num frames 7700... |
|
[2025-02-16 14:55:15,322][02958] Num frames 7800... |
|
[2025-02-16 14:55:15,450][02958] Num frames 7900... |
|
[2025-02-16 14:55:15,584][02958] Num frames 8000... |
|
[2025-02-16 14:55:15,766][02958] Num frames 8100... |
|
[2025-02-16 14:55:15,935][02958] Num frames 8200... |
|
[2025-02-16 14:55:16,137][02958] Avg episode rewards: #0: 23.110, true rewards: #0: 10.360 |
|
[2025-02-16 14:55:16,139][02958] Avg episode reward: 23.110, avg true_objective: 10.360 |
|
[2025-02-16 14:55:16,163][02958] Num frames 8300... |
|
[2025-02-16 14:55:16,340][02958] Num frames 8400... |
|
[2025-02-16 14:55:16,515][02958] Num frames 8500... |
|
[2025-02-16 14:55:16,680][02958] Num frames 8600... |
|
[2025-02-16 14:55:16,848][02958] Num frames 8700... |
|
[2025-02-16 14:55:16,914][02958] Avg episode rewards: #0: 21.227, true rewards: #0: 9.671 |
|
[2025-02-16 14:55:16,916][02958] Avg episode reward: 21.227, avg true_objective: 9.671 |
|
[2025-02-16 14:55:17,088][02958] Num frames 8800... |
|
[2025-02-16 14:55:17,270][02958] Num frames 8900... |
|
[2025-02-16 14:55:17,455][02958] Num frames 9000... |
|
[2025-02-16 14:55:17,636][02958] Num frames 9100... |
|
[2025-02-16 14:55:17,766][02958] Num frames 9200... |
|
[2025-02-16 14:55:17,894][02958] Num frames 9300... |
|
[2025-02-16 14:55:18,024][02958] Num frames 9400... |
|
[2025-02-16 14:55:18,131][02958] Avg episode rewards: #0: 20.640, true rewards: #0: 9.440 |
|
[2025-02-16 14:55:18,133][02958] Avg episode reward: 20.640, avg true_objective: 9.440 |
|
[2025-02-16 14:56:10,337][02958] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-02-16 14:59:27,372][02958] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-16 14:59:27,374][02958] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-02-16 14:59:27,375][02958] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-16 14:59:27,378][02958] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-16 14:59:27,379][02958] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-16 14:59:27,381][02958] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-16 14:59:27,383][02958] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-02-16 14:59:27,384][02958] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-16 14:59:27,385][02958] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-02-16 14:59:27,386][02958] Adding new argument 'hf_repository'='AndiB93/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-02-16 14:59:27,387][02958] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-16 14:59:27,388][02958] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-16 14:59:27,389][02958] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-16 14:59:27,390][02958] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-16 14:59:27,391][02958] Using frameskip 1 and render_action_repeat=4 for evaluation |
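This second evaluation pass differs from the first only in the upload arguments (`push_to_hub=True`, the target `hf_repository`, and the tighter `max_num_frames=100000`). A sketch of the corresponding call, under the same `sf_examples` assumptions as the earlier evaluation sketch:

```python
from sample_factory.enjoy import enjoy
from sf_examples.vizdoom.train_vizdoom import (
    parse_vizdoom_cfg,
    register_vizdoom_components,
)

register_vizdoom_components()
cfg = parse_vizdoom_cfg(
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",
        "--no_render",
        "--save_video",
        "--max_num_episodes=10",
        "--max_num_frames=100000",
        "--push_to_hub",          # upload checkpoint, model card and video
        "--hf_repository=AndiB93/rl_course_vizdoom_health_gathering_supreme",
        "--train_dir=/content/train_dir",
        "--experiment=default_experiment",
    ],
    evaluation=True,
)
status = enjoy(cfg)
```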
|
[2025-02-16 14:59:27,421][02958] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-16 14:59:27,423][02958] RunningMeanStd input shape: (1,) |
|
[2025-02-16 14:59:27,436][02958] ConvEncoder: input_channels=3 |
|
[2025-02-16 14:59:27,471][02958] Conv encoder output size: 512 |
|
[2025-02-16 14:59:27,472][02958] Policy head output size: 512 |
|
[2025-02-16 14:59:27,492][02958] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-16 14:59:27,946][02958] Num frames 100... |
|
[2025-02-16 14:59:28,102][02958] Num frames 200... |
|
[2025-02-16 14:59:28,262][02958] Num frames 300... |
|
[2025-02-16 14:59:28,389][02958] Num frames 400... |
|
[2025-02-16 14:59:28,521][02958] Num frames 500... |
|
[2025-02-16 14:59:28,657][02958] Num frames 600... |
|
[2025-02-16 14:59:28,803][02958] Num frames 700... |
|
[2025-02-16 14:59:28,931][02958] Num frames 800... |
|
[2025-02-16 14:59:29,059][02958] Num frames 900... |
|
[2025-02-16 14:59:29,197][02958] Num frames 1000... |
|
[2025-02-16 14:59:29,328][02958] Num frames 1100... |
|
[2025-02-16 14:59:29,430][02958] Avg episode rewards: #0: 28.340, true rewards: #0: 11.340 |
|
[2025-02-16 14:59:29,432][02958] Avg episode reward: 28.340, avg true_objective: 11.340 |
|
[2025-02-16 14:59:29,521][02958] Num frames 1200... |
|
[2025-02-16 14:59:29,648][02958] Num frames 1300... |
|
[2025-02-16 14:59:29,783][02958] Num frames 1400... |
|
[2025-02-16 14:59:29,912][02958] Num frames 1500... |
|
[2025-02-16 14:59:30,074][02958] Avg episode rewards: #0: 16.910, true rewards: #0: 7.910 |
|
[2025-02-16 14:59:30,075][02958] Avg episode reward: 16.910, avg true_objective: 7.910 |
|
[2025-02-16 14:59:30,107][02958] Num frames 1600... |
|
[2025-02-16 14:59:30,237][02958] Num frames 1700... |
|
[2025-02-16 14:59:30,362][02958] Num frames 1800... |
|
[2025-02-16 14:59:30,500][02958] Num frames 1900... |
|
[2025-02-16 14:59:30,681][02958] Num frames 2000... |
|
[2025-02-16 14:59:30,858][02958] Num frames 2100... |
|
[2025-02-16 14:59:31,024][02958] Num frames 2200... |
|
[2025-02-16 14:59:31,184][02958] Avg episode rewards: #0: 14.847, true rewards: #0: 7.513 |
|
[2025-02-16 14:59:31,186][02958] Avg episode reward: 14.847, avg true_objective: 7.513 |
|
[2025-02-16 14:59:31,265][02958] Num frames 2300... |
|
[2025-02-16 14:59:31,434][02958] Num frames 2400... |
|
[2025-02-16 14:59:31,598][02958] Num frames 2500... |
|
[2025-02-16 14:59:31,767][02958] Num frames 2600... |
|
[2025-02-16 14:59:31,951][02958] Num frames 2700... |
|
[2025-02-16 14:59:32,015][02958] Avg episode rewards: #0: 13.755, true rewards: #0: 6.755 |
|
[2025-02-16 14:59:32,016][02958] Avg episode reward: 13.755, avg true_objective: 6.755 |
|
[2025-02-16 14:59:32,196][02958] Num frames 2800... |
|
[2025-02-16 14:59:32,379][02958] Num frames 2900... |
|
[2025-02-16 14:59:32,567][02958] Num frames 3000... |
|
[2025-02-16 14:59:32,701][02958] Num frames 3100... |
|
[2025-02-16 14:59:32,833][02958] Num frames 3200... |
|
[2025-02-16 14:59:32,971][02958] Num frames 3300... |
|
[2025-02-16 14:59:33,105][02958] Num frames 3400... |
|
[2025-02-16 14:59:33,235][02958] Num frames 3500... |
|
[2025-02-16 14:59:33,364][02958] Num frames 3600... |
|
[2025-02-16 14:59:33,540][02958] Avg episode rewards: #0: 14.788, true rewards: #0: 7.388 |
|
[2025-02-16 14:59:33,541][02958] Avg episode reward: 14.788, avg true_objective: 7.388 |
|
[2025-02-16 14:59:33,551][02958] Num frames 3700... |
|
[2025-02-16 14:59:33,682][02958] Num frames 3800... |
|
[2025-02-16 14:59:33,810][02958] Num frames 3900... |
|
[2025-02-16 14:59:33,955][02958] Num frames 4000... |
|
[2025-02-16 14:59:34,085][02958] Num frames 4100... |
|
[2025-02-16 14:59:34,223][02958] Num frames 4200... |
|
[2025-02-16 14:59:34,326][02958] Avg episode rewards: #0: 13.897, true rewards: #0: 7.063 |
|
[2025-02-16 14:59:34,328][02958] Avg episode reward: 13.897, avg true_objective: 7.063 |
|
[2025-02-16 14:59:34,410][02958] Num frames 4300... |
|
[2025-02-16 14:59:34,541][02958] Num frames 4400... |
|
[2025-02-16 14:59:34,668][02958] Num frames 4500... |
|
[2025-02-16 14:59:34,798][02958] Num frames 4600... |
|
[2025-02-16 14:59:34,933][02958] Num frames 4700... |
|
[2025-02-16 14:59:35,063][02958] Num frames 4800... |
|
[2025-02-16 14:59:35,198][02958] Num frames 4900... |
|
[2025-02-16 14:59:35,329][02958] Num frames 5000... |
|
[2025-02-16 14:59:35,458][02958] Num frames 5100... |
|
[2025-02-16 14:59:35,596][02958] Num frames 5200... |
|
[2025-02-16 14:59:35,726][02958] Num frames 5300... |
|
[2025-02-16 14:59:35,855][02958] Num frames 5400... |
|
[2025-02-16 14:59:35,939][02958] Avg episode rewards: #0: 15.460, true rewards: #0: 7.746 |
|
[2025-02-16 14:59:35,941][02958] Avg episode reward: 15.460, avg true_objective: 7.746 |
|
[2025-02-16 14:59:36,045][02958] Num frames 5500... |
|
[2025-02-16 14:59:36,182][02958] Num frames 5600... |
|
[2025-02-16 14:59:36,314][02958] Num frames 5700... |
|
[2025-02-16 14:59:36,441][02958] Num frames 5800... |
|
[2025-02-16 14:59:36,569][02958] Num frames 5900... |
|
[2025-02-16 14:59:36,699][02958] Num frames 6000... |
|
[2025-02-16 14:59:36,829][02958] Num frames 6100... |
|
[2025-02-16 14:59:36,958][02958] Num frames 6200... |
|
[2025-02-16 14:59:37,061][02958] Avg episode rewards: #0: 15.913, true rewards: #0: 7.787 |
|
[2025-02-16 14:59:37,062][02958] Avg episode reward: 15.913, avg true_objective: 7.787 |
|
[2025-02-16 14:59:37,160][02958] Num frames 6300... |
|
[2025-02-16 14:59:37,289][02958] Num frames 6400... |
|
[2025-02-16 14:59:37,416][02958] Num frames 6500... |
|
[2025-02-16 14:59:37,577][02958] Avg episode rewards: #0: 14.647, true rewards: #0: 7.313 |
|
[2025-02-16 14:59:37,579][02958] Avg episode reward: 14.647, avg true_objective: 7.313 |
|
[2025-02-16 14:59:37,604][02958] Num frames 6600... |
|
[2025-02-16 14:59:37,733][02958] Num frames 6700... |
|
[2025-02-16 14:59:37,861][02958] Num frames 6800... |
|
[2025-02-16 14:59:37,998][02958] Num frames 6900... |
|
[2025-02-16 14:59:38,135][02958] Num frames 7000... |
|
[2025-02-16 14:59:38,277][02958] Num frames 7100... |
|
[2025-02-16 14:59:38,408][02958] Num frames 7200... |
|
[2025-02-16 14:59:38,538][02958] Num frames 7300... |
|
[2025-02-16 14:59:38,617][02958] Avg episode rewards: #0: 14.618, true rewards: #0: 7.318 |
|
[2025-02-16 14:59:38,620][02958] Avg episode reward: 14.618, avg true_objective: 7.318 |
|
[2025-02-16 15:00:18,812][02958] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-02-16 15:00:35,541][02958] The model has been pushed to https://huggingface.co/AndiB93/rl_course_vizdoom_health_gathering_supreme |
|
[2025-02-16 15:04:28,258][02958] Environment doom_basic already registered, overwriting... |
|
[2025-02-16 15:04:28,264][02958] Environment doom_two_colors_easy already registered, overwriting... |
|
[2025-02-16 15:04:28,266][02958] Environment doom_two_colors_hard already registered, overwriting... |
|
[2025-02-16 15:04:28,267][02958] Environment doom_dm already registered, overwriting... |
|
[2025-02-16 15:04:28,269][02958] Environment doom_dwango5 already registered, overwriting... |
|
[2025-02-16 15:04:28,270][02958] Environment doom_my_way_home_flat_actions already registered, overwriting... |
|
[2025-02-16 15:04:28,272][02958] Environment doom_defend_the_center_flat_actions already registered, overwriting... |
|
[2025-02-16 15:04:28,273][02958] Environment doom_my_way_home already registered, overwriting... |
|
[2025-02-16 15:04:28,274][02958] Environment doom_deadly_corridor already registered, overwriting... |
|
[2025-02-16 15:04:28,275][02958] Environment doom_defend_the_center already registered, overwriting... |
|
[2025-02-16 15:04:28,277][02958] Environment doom_defend_the_line already registered, overwriting... |
|
[2025-02-16 15:04:28,278][02958] Environment doom_health_gathering already registered, overwriting... |
|
[2025-02-16 15:04:28,279][02958] Environment doom_health_gathering_supreme already registered, overwriting... |
|
[2025-02-16 15:04:28,287][02958] Environment doom_battle already registered, overwriting... |
|
[2025-02-16 15:04:28,289][02958] Environment doom_battle2 already registered, overwriting... |
|
[2025-02-16 15:04:28,290][02958] Environment doom_duel_bots already registered, overwriting... |
|
[2025-02-16 15:04:28,291][02958] Environment doom_deathmatch_bots already registered, overwriting... |
|
[2025-02-16 15:04:28,293][02958] Environment doom_duel already registered, overwriting... |
|
[2025-02-16 15:04:28,296][02958] Environment doom_deathmatch_full already registered, overwriting... |
|
[2025-02-16 15:04:28,297][02958] Environment doom_benchmark already registered, overwriting... |
|
[2025-02-16 15:04:28,297][02958] register_encoder_factory: <function make_vizdoom_encoder at 0x7f6ce225d6c0> |
|
[2025-02-16 15:04:28,306][02958] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-16 15:04:28,314][02958] Overriding arg 'train_for_env_steps' with value 6000000 passed from command line |
|
[2025-02-16 15:04:28,320][02958] Experiment dir /content/train_dir/default_experiment already exists! |
|
[2025-02-16 15:04:28,321][02958] Resuming existing experiment from /content/train_dir/default_experiment... |
|
[2025-02-16 15:04:28,324][02958] Weights and Biases integration disabled |
|
[2025-02-16 15:04:28,327][02958] Environment var CUDA_VISIBLE_DEVICES is 0 |
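Resuming reuses the saved config (`restart_behavior=resume` in the dump below); only the explicitly passed `train_for_env_steps=6000000` is overridden, so training continues from checkpoint 978 toward the new frame budget. A sketch of the resume call, under the same `sf_examples` assumptions as the evaluation sketches above:

```python
from sample_factory.train import run_rl
from sf_examples.vizdoom.train_vizdoom import (
    parse_vizdoom_cfg,
    register_vizdoom_components,
)

register_vizdoom_components()
cfg = parse_vizdoom_cfg(
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=8",
        "--num_envs_per_worker=4",
        "--train_for_env_steps=6000000",  # the override logged above
    ]
)
status = run_rl(cfg)  # detects the existing experiment dir and resumes from it
```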
|
|
|
[2025-02-16 15:04:31,336][02958] Starting experiment with the following configuration: |
|
help=False |
|
algo=APPO |
|
env=doom_health_gathering_supreme |
|
experiment=default_experiment |
|
train_dir=/content/train_dir |
|
restart_behavior=resume |
|
device=gpu |
|
seed=None |
|
num_policies=1 |
|
async_rl=True |
|
serial_mode=False |
|
batched_sampling=False |
|
num_batches_to_accumulate=2 |
|
worker_num_splits=2 |
|
policy_workers_per_policy=1 |
|
max_policy_lag=1000 |
|
num_workers=8 |
|
num_envs_per_worker=4 |
|
batch_size=1024 |
|
num_batches_per_epoch=1 |
|
num_epochs=1 |
|
rollout=32 |
|
recurrence=32 |
|
shuffle_minibatches=False |
|
gamma=0.99 |
|
reward_scale=1.0 |
|
reward_clip=1000.0 |
|
value_bootstrap=False |
|
normalize_returns=True |
|
exploration_loss_coeff=0.001 |
|
value_loss_coeff=0.5 |
|
kl_loss_coeff=0.0 |
|
exploration_loss=symmetric_kl |
|
gae_lambda=0.95 |
|
ppo_clip_ratio=0.1 |
|
ppo_clip_value=0.2 |
|
with_vtrace=False |
|
vtrace_rho=1.0 |
|
vtrace_c=1.0 |
|
optimizer=adam |
|
adam_eps=1e-06 |
|
adam_beta1=0.9 |
|
adam_beta2=0.999 |
|
max_grad_norm=4.0 |
|
learning_rate=0.0001 |
|
lr_schedule=constant |
|
lr_schedule_kl_threshold=0.008 |
|
lr_adaptive_min=1e-06 |
|
lr_adaptive_max=0.01 |
|
obs_subtract_mean=0.0 |
|
obs_scale=255.0 |
|
normalize_input=True |
|
normalize_input_keys=None |
|
decorrelate_experience_max_seconds=0 |
|
decorrelate_envs_on_one_worker=True |
|
actor_worker_gpus=[] |
|
set_workers_cpu_affinity=True |
|
force_envs_single_thread=False |
|
default_niceness=0 |
|
log_to_file=True |
|
experiment_summaries_interval=10 |
|
flush_summaries_interval=30 |
|
stats_avg=100 |
|
summaries_use_frameskip=True |
|
heartbeat_interval=20 |
|
heartbeat_reporting_interval=600 |
|
train_for_env_steps=6000000 |
|
train_for_seconds=10000000000 |
|
save_every_sec=120 |
|
keep_checkpoints=2 |
|
load_checkpoint_kind=latest |
|
save_milestones_sec=-1 |
|
save_best_every_sec=5 |
|
save_best_metric=reward |
|
save_best_after=100000 |
|
benchmark=False |
|
encoder_mlp_layers=[512, 512] |
|
encoder_conv_architecture=convnet_simple |
|
encoder_conv_mlp_layers=[512] |
|
use_rnn=True |
|
rnn_size=512 |
|
rnn_type=gru |
|
rnn_num_layers=1 |
|
decoder_mlp_layers=[] |
|
nonlinearity=elu |
|
policy_initialization=orthogonal |
|
policy_init_gain=1.0 |
|
actor_critic_share_weights=True |
|
adaptive_stddev=True |
|
continuous_tanh_scale=0.0 |
|
initial_stddev=1.0 |
|
use_env_info_cache=False |
|
env_gpu_actions=False |
|
env_gpu_observations=True |
|
env_frameskip=4 |
|
env_framestack=1 |
|
pixel_format=CHW |
|
use_record_episode_statistics=False |
|
with_wandb=False |
|
wandb_user=None |
|
wandb_project=sample_factory |
|
wandb_group=None |
|
wandb_job_type=SF |
|
wandb_tags=[] |
|
with_pbt=False |
|
pbt_mix_policies_in_one_env=True |
|
pbt_period_env_steps=5000000 |
|
pbt_start_mutation=20000000 |
|
pbt_replace_fraction=0.3 |
|
pbt_mutation_rate=0.15 |
|
pbt_replace_reward_gap=0.1 |
|
pbt_replace_reward_gap_absolute=1e-06 |
|
pbt_optimize_gamma=False |
|
pbt_target_objective=true_objective |
|
pbt_perturb_min=1.1 |
|
pbt_perturb_max=1.5 |
|
num_agents=-1 |
|
num_humans=0 |
|
num_bots=-1 |
|
start_bot_difficulty=None |
|
timelimit=None |
|
res_w=128 |
|
res_h=72 |
|
wide_aspect_ratio=False |
|
eval_env_frameskip=1 |
|
fps=35 |
|
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 |
|
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} |
|
git_hash=unknown |
|
git_repo_name=not a git repository |
|
[2025-02-16 15:04:31,339][02958] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-02-16 15:04:31,341][02958] Rollout worker 0 uses device cpu |
|
[2025-02-16 15:04:31,343][02958] Rollout worker 1 uses device cpu |
|
[2025-02-16 15:04:31,344][02958] Rollout worker 2 uses device cpu |
|
[2025-02-16 15:04:31,345][02958] Rollout worker 3 uses device cpu |
|
[2025-02-16 15:04:31,346][02958] Rollout worker 4 uses device cpu |
|
[2025-02-16 15:04:31,347][02958] Rollout worker 5 uses device cpu |
|
[2025-02-16 15:04:31,348][02958] Rollout worker 6 uses device cpu |
|
[2025-02-16 15:04:31,351][02958] Rollout worker 7 uses device cpu |
|
[2025-02-16 15:04:31,459][02958] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 15:04:31,461][02958] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-02-16 15:04:31,508][02958] Starting all processes... |
|
[2025-02-16 15:04:31,510][02958] Starting process learner_proc0 |
|
[2025-02-16 15:04:31,593][02958] Starting all processes... |
|
[2025-02-16 15:04:31,600][02958] Starting process inference_proc0-0 |
|
[2025-02-16 15:04:31,601][02958] Starting process rollout_proc0 |
|
[2025-02-16 15:04:31,601][02958] Starting process rollout_proc1 |
|
[2025-02-16 15:04:31,601][02958] Starting process rollout_proc2 |
|
[2025-02-16 15:04:31,601][02958] Starting process rollout_proc3 |
|
[2025-02-16 15:04:31,601][02958] Starting process rollout_proc4 |
|
[2025-02-16 15:04:31,601][02958] Starting process rollout_proc5 |
|
[2025-02-16 15:04:31,601][02958] Starting process rollout_proc6 |
|
[2025-02-16 15:04:31,601][02958] Starting process rollout_proc7 |
|
[2025-02-16 15:04:46,450][15349] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 15:04:46,451][15349] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-02-16 15:04:46,528][15349] Num visible devices: 1 |
|
[2025-02-16 15:04:46,587][15349] Starting seed is not provided |
|
[2025-02-16 15:04:46,588][15349] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 15:04:46,588][15349] Initializing actor-critic model on device cuda:0 |
|
[2025-02-16 15:04:46,588][15349] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-16 15:04:46,590][15349] RunningMeanStd input shape: (1,) |
|
[2025-02-16 15:04:46,656][15349] ConvEncoder: input_channels=3 |
|
[2025-02-16 15:04:47,011][15370] Worker 7 uses CPU cores [1] |
|
[2025-02-16 15:04:47,155][15365] Worker 2 uses CPU cores [0] |
|
[2025-02-16 15:04:47,251][15366] Worker 3 uses CPU cores [1] |
|
[2025-02-16 15:04:47,271][15363] Worker 0 uses CPU cores [0] |
|
[2025-02-16 15:04:47,385][15349] Conv encoder output size: 512 |
|
[2025-02-16 15:04:47,386][15349] Policy head output size: 512 |
|
[2025-02-16 15:04:47,404][15364] Worker 1 uses CPU cores [1] |
|
[2025-02-16 15:04:47,411][15362] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 15:04:47,411][15362] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-02-16 15:04:47,449][15367] Worker 4 uses CPU cores [0] |
|
[2025-02-16 15:04:47,479][15349] Created Actor Critic model with architecture: |
|
[2025-02-16 15:04:47,479][15349] ActorCriticSharedWeights( |
|
  (obs_normalizer): ObservationNormalizer( |
|
    (running_mean_std): RunningMeanStdDictInPlace( |
|
      (running_mean_std): ModuleDict( |
|
        (obs): RunningMeanStdInPlace() |
|
      ) |
|
    ) |
|
  ) |
|
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|
  (encoder): VizdoomEncoder( |
|
    (basic_encoder): ConvEncoder( |
|
      (enc): RecursiveScriptModule( |
|
        original_name=ConvEncoderImpl |
|
        (conv_head): RecursiveScriptModule( |
|
          original_name=Sequential |
|
          (0): RecursiveScriptModule(original_name=Conv2d) |
|
          (1): RecursiveScriptModule(original_name=ELU) |
|
          (2): RecursiveScriptModule(original_name=Conv2d) |
|
          (3): RecursiveScriptModule(original_name=ELU) |
|
          (4): RecursiveScriptModule(original_name=Conv2d) |
|
          (5): RecursiveScriptModule(original_name=ELU) |
|
        ) |
|
        (mlp_layers): RecursiveScriptModule( |
|
          original_name=Sequential |
|
          (0): RecursiveScriptModule(original_name=Linear) |
|
          (1): RecursiveScriptModule(original_name=ELU) |
|
        ) |
|
      ) |
|
    ) |
|
  ) |
|
  (core): ModelCoreRNN( |
|
    (core): GRU(512, 512) |
|
  ) |
|
  (decoder): MlpDecoder( |
|
    (mlp): Identity() |
|
  ) |
|
  (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|
  (action_parameterization): ActionParameterizationDefault( |
|
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|
  ) |
|
) |
|
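The dump above is the shared-weights actor-critic for the (3, 72, 128) VizDoom observation: three ELU conv layers into a 512-unit linear layer, a GRU(512, 512) core, a 1-unit critic head, and a 5-way action-logits head. A minimal PyTorch sketch with the same input/output shapes, assuming typical kernel sizes and strides (the real classes are TorchScript modules and differ in detail):

import torch
import torch.nn as nn

class TinyActorCritic(nn.Module):
    def __init__(self, num_actions=5, hidden=512):
        super().__init__()
        # Three ELU conv layers, mirroring conv_head above; the kernel
        # sizes and strides here are assumptions, not the logged config.
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # probe the flattened size for a (3, 72, 128) input
            n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).numel()
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())
        self.core = nn.GRU(hidden, hidden)                  # GRU(512, 512)
        self.critic_linear = nn.Linear(hidden, 1)           # value head
        self.distribution_linear = nn.Linear(hidden, num_actions)  # 5 actions

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # sequence length 1
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

This lines up with the "Conv encoder output size: 512" and "Policy head output size: 512" messages: both heads read the same 512-d core output.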
[2025-02-16 15:04:47,487][15368] Worker 5 uses CPU cores [1] |
|
[2025-02-16 15:04:47,492][15362] Num visible devices: 1 |
|
[2025-02-16 15:04:47,515][15369] Worker 6 uses CPU cores [0] |
|
[2025-02-16 15:04:47,698][15349] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-02-16 15:04:48,639][15349] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-02-16 15:04:48,679][15349] Loading model from checkpoint |
|
[2025-02-16 15:04:48,681][15349] Loaded experiment state at self.train_step=978, self.env_steps=4005888 |
|
[2025-02-16 15:04:48,681][15349] Initialized policy 0 weights for model version 978 |
|
[2025-02-16 15:04:48,684][15349] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-16 15:04:48,691][15349] LearnerWorker_p0 finished initialization! |
|
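The learner did not start from scratch: it restored model and optimizer state from the newest checkpoint and picked up the step counters, which is where train_step=978 and env_steps=4005888 come from. A hedged sketch of that resume step (the state-dict key names are assumptions):

import torch

def load_checkpoint(path, model, optimizer, device="cuda:0"):
    # Restore weights and optimizer state saved by an earlier run.
    state = torch.load(path, map_location=device)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    # Counters behind "Loaded experiment state at self.train_step=978, ...".
    return state.get("train_step", 0), state.get("env_steps", 0)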
[2025-02-16 15:04:48,905][15362] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-16 15:04:48,906][15362] RunningMeanStd input shape: (1,) |
|
[2025-02-16 15:04:48,918][15362] ConvEncoder: input_channels=3 |
|
[2025-02-16 15:04:49,018][15362] Conv encoder output size: 512 |
|
[2025-02-16 15:04:49,018][15362] Policy head output size: 512 |
|
[2025-02-16 15:04:49,052][02958] Inference worker 0-0 is ready! |
|
[2025-02-16 15:04:49,053][02958] All inference workers are ready! Signal rollout workers to start! |
|
[2025-02-16 15:04:49,306][15364] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 15:04:49,322][15369] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 15:04:49,329][15363] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 15:04:49,339][15365] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 15:04:49,349][15368] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 15:04:49,391][15370] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 15:04:49,431][15367] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 15:04:49,433][15366] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-16 15:04:50,339][15364] Decorrelating experience for 0 frames... |
|
[2025-02-16 15:04:50,384][15366] Decorrelating experience for 0 frames... |
|
[2025-02-16 15:04:50,636][15369] Decorrelating experience for 0 frames... |
|
[2025-02-16 15:04:50,652][15365] Decorrelating experience for 0 frames... |
|
[2025-02-16 15:04:50,689][15367] Decorrelating experience for 0 frames... |
|
[2025-02-16 15:04:50,843][15364] Decorrelating experience for 32 frames... |
|
[2025-02-16 15:04:51,245][15366] Decorrelating experience for 32 frames... |
|
[2025-02-16 15:04:51,448][02958] Heartbeat connected on Batcher_0 |
|
[2025-02-16 15:04:51,453][02958] Heartbeat connected on LearnerWorker_p0 |
|
[2025-02-16 15:04:51,487][02958] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-02-16 15:04:51,745][15370] Decorrelating experience for 0 frames... |
|
[2025-02-16 15:04:52,041][15369] Decorrelating experience for 32 frames... |
|
[2025-02-16 15:04:52,063][15365] Decorrelating experience for 32 frames... |
|
[2025-02-16 15:04:52,147][15367] Decorrelating experience for 32 frames... |
|
[2025-02-16 15:04:52,186][15370] Decorrelating experience for 32 frames... |
|
[2025-02-16 15:04:52,422][15363] Decorrelating experience for 0 frames... |
|
[2025-02-16 15:04:53,173][15364] Decorrelating experience for 64 frames... |
|
[2025-02-16 15:04:53,327][02958] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
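These status lines report throughput over sliding 10/60/300-second windows; the very first one is nan because there is no earlier sample to diff against, and "Total num frames" resumes from the checkpoint's 4005888. A sketch of the bookkeeping, assuming one (time, frames) sample per report rather than Sample Factory's exact implementation:

import time
from collections import deque

class FpsTracker:
    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.samples = deque()  # (timestamp, total_env_frames) pairs

    def record(self, total_frames):
        self.samples.append((time.time(), total_frames))

    def fps(self):
        now, out = time.time(), {}
        for w in self.windows:
            recent = [(t, f) for t, f in self.samples if now - t <= w]
            if len(recent) < 2:
                out[w] = float("nan")  # not enough history yet
            else:
                (t0, f0), (t1, f1) = recent[0], recent[-1]
                out[w] = (f1 - f0) / max(t1 - t0, 1e-9)
        return out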
[2025-02-16 15:04:53,367][15366] Decorrelating experience for 64 frames... |
|
[2025-02-16 15:04:53,590][15363] Decorrelating experience for 32 frames... |
|
[2025-02-16 15:04:53,662][15369] Decorrelating experience for 64 frames... |
|
[2025-02-16 15:04:53,677][15368] Decorrelating experience for 0 frames... |
|
[2025-02-16 15:04:53,678][15365] Decorrelating experience for 64 frames... |
|
[2025-02-16 15:04:54,463][15364] Decorrelating experience for 96 frames... |
|
[2025-02-16 15:04:54,648][02958] Heartbeat connected on RolloutWorker_w1 |
|
[2025-02-16 15:04:54,685][15366] Decorrelating experience for 96 frames... |
|
[2025-02-16 15:04:54,957][02958] Heartbeat connected on RolloutWorker_w3 |
|
[2025-02-16 15:04:54,973][15370] Decorrelating experience for 64 frames... |
|
[2025-02-16 15:04:55,259][15367] Decorrelating experience for 64 frames... |
|
[2025-02-16 15:04:55,363][15369] Decorrelating experience for 96 frames... |
|
[2025-02-16 15:04:55,367][15365] Decorrelating experience for 96 frames... |
|
[2025-02-16 15:04:55,815][02958] Heartbeat connected on RolloutWorker_w2 |
|
[2025-02-16 15:04:55,828][02958] Heartbeat connected on RolloutWorker_w6 |
|
[2025-02-16 15:04:57,348][15363] Decorrelating experience for 64 frames... |
|
[2025-02-16 15:04:57,944][15370] Decorrelating experience for 96 frames... |
|
[2025-02-16 15:04:58,327][02958] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 234.8. Samples: 1174. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-02-16 15:04:58,329][02958] Avg episode reward: [(0, '1.780')] |
|
[2025-02-16 15:04:58,780][02958] Heartbeat connected on RolloutWorker_w7 |
|
[2025-02-16 15:05:00,056][15367] Decorrelating experience for 96 frames... |
|
[2025-02-16 15:05:00,927][02958] Heartbeat connected on RolloutWorker_w4 |
|
[2025-02-16 15:05:02,187][15349] Signal inference workers to stop experience collection... |
|
[2025-02-16 15:05:02,196][15362] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-02-16 15:05:02,262][15368] Decorrelating experience for 32 frames... |
|
[2025-02-16 15:05:02,530][15349] Signal inference workers to resume experience collection... |
|
[2025-02-16 15:05:02,533][15362] InferenceWorker_p0-w0: resuming experience collection |
|
[2025-02-16 15:05:02,687][15363] Decorrelating experience for 96 frames... |
|
[2025-02-16 15:05:03,197][02958] Heartbeat connected on RolloutWorker_w0 |
|
[2025-02-16 15:05:03,327][02958] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4014080. Throughput: 0: 205.0. Samples: 2050. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
|
[2025-02-16 15:05:03,332][02958] Avg episode reward: [(0, '4.184')] |
|
[2025-02-16 15:05:04,048][15368] Decorrelating experience for 64 frames... |
|
[2025-02-16 15:05:06,193][15368] Decorrelating experience for 96 frames... |
|
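Before real collection starts, each rollout worker warms its environments up with random-action steps, logged in 32-frame chunks (0, 32, 64, 96), so the eight workers do not feed the learner lockstep, correlated trajectories. A plausible sketch of one worker's warm-up with a Gymnasium-style env (the chunked loop is inferred from the log cadence, not taken from the library):

def decorrelate(env, max_frames=96, chunk=32):
    obs, _ = env.reset()
    for done in range(0, max_frames + 1, chunk):
        print(f"Decorrelating experience for {done} frames...")
        if done == max_frames:
            break
        for _ in range(chunk):
            obs, _, terminated, truncated, _ = env.step(env.action_space.sample())
            if terminated or truncated:
                obs, _ = env.reset()
    return obs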
[2025-02-16 15:05:06,505][02958] Heartbeat connected on RolloutWorker_w5 |
|
[2025-02-16 15:05:08,328][02958] Fps is (10 sec: 2867.0, 60 sec: 1911.4, 300 sec: 1911.4). Total num frames: 4034560. Throughput: 0: 469.3. Samples: 7040. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 15:05:08,333][02958] Avg episode reward: [(0, '11.475')] |
|
[2025-02-16 15:05:10,183][15362] Updated weights for policy 0, policy_version 988 (0.0133) |
|
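From here on, the inference worker periodically reloads the learner's parameters; the number in parentheses is the seconds the copy took. The version numbers track the learner's train step (the checkpoint at train_step 1066 below lands between versions 1058 and 1068), and reloads arrive roughly every ten steps. An illustrative sketch of a version-gated reload (shared_version and get_state_dict are assumed names, not Sample Factory's API):

import time

def maybe_update_weights(model, shared_version, local_version, get_state_dict):
    # Reload only when the learner has published a newer policy version.
    if shared_version > local_version:
        t0 = time.time()
        model.load_state_dict(get_state_dict())
        print(f"Updated weights for policy 0, policy_version {shared_version} "
              f"({time.time() - t0:.4f})")
        return shared_version
    return local_version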
[2025-02-16 15:05:13,329][02958] Fps is (10 sec: 4095.4, 60 sec: 2457.4, 300 sec: 2457.4). Total num frames: 4055040. Throughput: 0: 650.8. Samples: 13016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:05:13,331][02958] Avg episode reward: [(0, '15.056')] |
|
[2025-02-16 15:05:18,327][02958] Fps is (10 sec: 4096.3, 60 sec: 2785.3, 300 sec: 2785.3). Total num frames: 4075520. Throughput: 0: 624.2. Samples: 15604. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:05:18,330][02958] Avg episode reward: [(0, '16.810')] |
|
[2025-02-16 15:05:20,786][15362] Updated weights for policy 0, policy_version 998 (0.0015) |
|
[2025-02-16 15:05:23,327][02958] Fps is (10 sec: 4096.6, 60 sec: 3003.7, 300 sec: 3003.7). Total num frames: 4096000. Throughput: 0: 750.7. Samples: 22520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:05:23,330][02958] Avg episode reward: [(0, '18.635')] |
|
[2025-02-16 15:05:28,327][02958] Fps is (10 sec: 4096.0, 60 sec: 3159.8, 300 sec: 3159.8). Total num frames: 4116480. Throughput: 0: 808.1. Samples: 28284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:05:28,338][02958] Avg episode reward: [(0, '19.517')] |
|
[2025-02-16 15:05:31,071][15362] Updated weights for policy 0, policy_version 1008 (0.0023) |
|
[2025-02-16 15:05:33,327][02958] Fps is (10 sec: 4096.0, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 4136960. Throughput: 0: 780.3. Samples: 31214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:05:33,330][02958] Avg episode reward: [(0, '20.166')] |
|
[2025-02-16 15:05:38,327][02958] Fps is (10 sec: 4505.6, 60 sec: 3458.8, 300 sec: 3458.8). Total num frames: 4161536. Throughput: 0: 852.7. Samples: 38372. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:05:38,330][02958] Avg episode reward: [(0, '20.168')] |
|
[2025-02-16 15:05:40,082][15362] Updated weights for policy 0, policy_version 1018 (0.0013) |
|
[2025-02-16 15:05:43,327][02958] Fps is (10 sec: 4096.0, 60 sec: 3440.6, 300 sec: 3440.6). Total num frames: 4177920. Throughput: 0: 942.9. Samples: 43604. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-16 15:05:43,332][02958] Avg episode reward: [(0, '22.381')] |
|
[2025-02-16 15:05:48,327][02958] Fps is (10 sec: 3686.4, 60 sec: 3500.2, 300 sec: 3500.2). Total num frames: 4198400. Throughput: 0: 993.0. Samples: 46734. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:05:48,335][02958] Avg episode reward: [(0, '22.070')] |
|
[2025-02-16 15:05:50,263][15362] Updated weights for policy 0, policy_version 1028 (0.0017) |
|
[2025-02-16 15:05:53,327][02958] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3618.1). Total num frames: 4222976. Throughput: 0: 1044.4. Samples: 54038. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 15:05:53,334][02958] Avg episode reward: [(0, '22.696')] |
|
[2025-02-16 15:05:58,328][02958] Fps is (10 sec: 4095.8, 60 sec: 3891.2, 300 sec: 3591.9). Total num frames: 4239360. Throughput: 0: 1032.0. Samples: 59456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 15:05:58,330][02958] Avg episode reward: [(0, '21.906')] |
|
[2025-02-16 15:06:00,468][15362] Updated weights for policy 0, policy_version 1038 (0.0017) |
|
[2025-02-16 15:06:03,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 3686.4). Total num frames: 4263936. Throughput: 0: 1049.0. Samples: 62808. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:06:03,329][02958] Avg episode reward: [(0, '20.471')] |
|
[2025-02-16 15:06:08,327][02958] Fps is (10 sec: 4915.4, 60 sec: 4232.6, 300 sec: 3768.3). Total num frames: 4288512. Throughput: 0: 1054.8. Samples: 69984. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:06:08,329][02958] Avg episode reward: [(0, '19.329')] |
|
[2025-02-16 15:06:09,147][15362] Updated weights for policy 0, policy_version 1048 (0.0014) |
|
[2025-02-16 15:06:13,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4096.1, 300 sec: 3686.4). Total num frames: 4300800. Throughput: 0: 1041.2. Samples: 75136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:06:13,333][02958] Avg episode reward: [(0, '18.983')] |
|
[2025-02-16 15:06:18,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4164.3, 300 sec: 3758.7). Total num frames: 4325376. Throughput: 0: 1054.0. Samples: 78642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 15:06:18,333][02958] Avg episode reward: [(0, '20.200')] |
|
[2025-02-16 15:06:19,293][15362] Updated weights for policy 0, policy_version 1058 (0.0014) |
|
[2025-02-16 15:06:23,329][02958] Fps is (10 sec: 4914.2, 60 sec: 4232.4, 300 sec: 3822.8). Total num frames: 4349952. Throughput: 0: 1058.6. Samples: 86010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:06:23,340][02958] Avg episode reward: [(0, '21.815')] |
|
[2025-02-16 15:06:28,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 3794.2). Total num frames: 4366336. Throughput: 0: 1053.3. Samples: 91002. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:06:28,330][02958] Avg episode reward: [(0, '21.740')] |
|
[2025-02-16 15:06:28,337][15349] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001066_4366336.pth... |
|
[2025-02-16 15:06:28,483][15349] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000937_3837952.pth |
|
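Checkpoints are written every couple of minutes as checkpoint_{train_step:09d}_{env_steps}.pth, and the oldest surplus file is pruned right after. Note that env_steps is always train_step × 4096 in this run (1066 × 4096 = 4366336), i.e. each train step consumes one 4096-frame batch. A sketch of the save-and-prune rotation (the state-dict keys and the retention count of 2 are assumptions):

import glob
import os
import torch

def save_rotating_checkpoint(ckpt_dir, model, train_step, env_steps, keep=2):
    path = os.path.join(ckpt_dir, f"checkpoint_{train_step:09d}_{env_steps}.pth")
    print(f"Saving {path}...")
    torch.save({"model": model.state_dict(),
                "train_step": train_step,
                "env_steps": env_steps}, path)
    # Zero-padded train_step makes lexicographic order match numeric order.
    old = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    for stale in old[:-keep]:
        print(f"Removing {stale}")
        os.remove(stale)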
[2025-02-16 15:06:29,584][15362] Updated weights for policy 0, policy_version 1068 (0.0015) |
|
[2025-02-16 15:06:33,327][02958] Fps is (10 sec: 4096.9, 60 sec: 4232.5, 300 sec: 3850.2). Total num frames: 4390912. Throughput: 0: 1061.5. Samples: 94502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:06:33,329][02958] Avg episode reward: [(0, '22.965')] |
|
[2025-02-16 15:06:38,327][02958] Fps is (10 sec: 4505.5, 60 sec: 4164.3, 300 sec: 3861.9). Total num frames: 4411392. Throughput: 0: 1057.1. Samples: 101608. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:06:38,330][02958] Avg episode reward: [(0, '23.935')] |
|
[2025-02-16 15:06:38,349][15362] Updated weights for policy 0, policy_version 1078 (0.0025) |
|
[2025-02-16 15:06:43,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4164.3, 300 sec: 3835.3). Total num frames: 4427776. Throughput: 0: 1048.5. Samples: 106636. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:06:43,330][02958] Avg episode reward: [(0, '22.944')] |
|
[2025-02-16 15:06:48,327][02958] Fps is (10 sec: 4096.1, 60 sec: 4232.5, 300 sec: 3882.3). Total num frames: 4452352. Throughput: 0: 1054.2. Samples: 110248. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:06:48,329][02958] Avg episode reward: [(0, '23.315')] |
|
[2025-02-16 15:06:48,515][15362] Updated weights for policy 0, policy_version 1088 (0.0023) |
|
[2025-02-16 15:06:53,329][02958] Fps is (10 sec: 4914.4, 60 sec: 4232.4, 300 sec: 3925.3). Total num frames: 4476928. Throughput: 0: 1055.5. Samples: 117484. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:06:53,331][02958] Avg episode reward: [(0, '21.603')] |
|
[2025-02-16 15:06:58,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.6, 300 sec: 3899.4). Total num frames: 4493312. Throughput: 0: 1054.4. Samples: 122586. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 15:06:58,330][02958] Avg episode reward: [(0, '21.190')] |
|
[2025-02-16 15:06:58,767][15362] Updated weights for policy 0, policy_version 1098 (0.0021) |
|
[2025-02-16 15:07:03,327][02958] Fps is (10 sec: 4096.6, 60 sec: 4232.5, 300 sec: 3938.5). Total num frames: 4517888. Throughput: 0: 1058.1. Samples: 126258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:07:03,331][02958] Avg episode reward: [(0, '21.263')] |
|
[2025-02-16 15:07:07,170][15362] Updated weights for policy 0, policy_version 1108 (0.0017) |
|
[2025-02-16 15:07:08,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 3944.3). Total num frames: 4538368. Throughput: 0: 1053.7. Samples: 133426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:07:08,331][02958] Avg episode reward: [(0, '22.965')] |
|
[2025-02-16 15:07:13,327][02958] Fps is (10 sec: 3686.5, 60 sec: 4232.5, 300 sec: 3920.5). Total num frames: 4554752. Throughput: 0: 1054.3. Samples: 138444. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-16 15:07:13,331][02958] Avg episode reward: [(0, '22.836')] |
|
[2025-02-16 15:07:17,604][15362] Updated weights for policy 0, policy_version 1118 (0.0018) |
|
[2025-02-16 15:07:18,327][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 3954.8). Total num frames: 4579328. Throughput: 0: 1057.7. Samples: 142098. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:07:18,332][02958] Avg episode reward: [(0, '23.248')] |
|
[2025-02-16 15:07:23,327][02958] Fps is (10 sec: 4915.2, 60 sec: 4232.7, 300 sec: 3986.8). Total num frames: 4603904. Throughput: 0: 1058.3. Samples: 149230. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:07:23,331][02958] Avg episode reward: [(0, '23.327')] |
|
[2025-02-16 15:07:27,575][15362] Updated weights for policy 0, policy_version 1128 (0.0019) |
|
[2025-02-16 15:07:28,327][02958] Fps is (10 sec: 4096.1, 60 sec: 4232.5, 300 sec: 3963.9). Total num frames: 4620288. Throughput: 0: 1064.5. Samples: 154538. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 15:07:28,334][02958] Avg episode reward: [(0, '23.538')] |
|
[2025-02-16 15:07:33,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 3993.6). Total num frames: 4644864. Throughput: 0: 1063.4. Samples: 158102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:07:33,329][02958] Avg episode reward: [(0, '22.239')] |
|
[2025-02-16 15:07:36,386][15362] Updated weights for policy 0, policy_version 1138 (0.0021) |
|
[2025-02-16 15:07:38,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 3996.7). Total num frames: 4665344. Throughput: 0: 1053.9. Samples: 164908. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:07:38,337][02958] Avg episode reward: [(0, '22.090')] |
|
[2025-02-16 15:07:43,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 3999.6). Total num frames: 4685824. Throughput: 0: 1061.5. Samples: 170352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:07:43,334][02958] Avg episode reward: [(0, '22.763')] |
|
[2025-02-16 15:07:46,699][15362] Updated weights for policy 0, policy_version 1148 (0.0018) |
|
[2025-02-16 15:07:48,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4300.8, 300 sec: 4025.8). Total num frames: 4710400. Throughput: 0: 1057.0. Samples: 173822. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:07:48,330][02958] Avg episode reward: [(0, '21.992')] |
|
[2025-02-16 15:07:53,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.4, 300 sec: 4005.0). Total num frames: 4726784. Throughput: 0: 1047.8. Samples: 180578. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 15:07:53,331][02958] Avg episode reward: [(0, '22.973')] |
|
[2025-02-16 15:07:57,032][15362] Updated weights for policy 0, policy_version 1158 (0.0022) |
|
[2025-02-16 15:07:58,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4007.4). Total num frames: 4747264. Throughput: 0: 1060.8. Samples: 186182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:07:58,333][02958] Avg episode reward: [(0, '22.009')] |
|
[2025-02-16 15:08:03,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4031.3). Total num frames: 4771840. Throughput: 0: 1060.7. Samples: 189828. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:08:03,329][02958] Avg episode reward: [(0, '21.456')] |
|
[2025-02-16 15:08:05,647][15362] Updated weights for policy 0, policy_version 1168 (0.0014) |
|
[2025-02-16 15:08:08,331][02958] Fps is (10 sec: 4094.3, 60 sec: 4164.0, 300 sec: 4011.9). Total num frames: 4788224. Throughput: 0: 1043.3. Samples: 196182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:08:08,336][02958] Avg episode reward: [(0, '21.240')] |
|
[2025-02-16 15:08:13,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4300.8, 300 sec: 4034.6). Total num frames: 4812800. Throughput: 0: 1057.2. Samples: 202110. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:08:13,334][02958] Avg episode reward: [(0, '22.387')] |
|
[2025-02-16 15:08:15,842][15362] Updated weights for policy 0, policy_version 1178 (0.0016) |
|
[2025-02-16 15:08:18,327][02958] Fps is (10 sec: 4507.4, 60 sec: 4232.5, 300 sec: 4036.1). Total num frames: 4833280. Throughput: 0: 1058.1. Samples: 205718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:08:18,331][02958] Avg episode reward: [(0, '22.017')] |
|
[2025-02-16 15:08:23,329][02958] Fps is (10 sec: 4095.4, 60 sec: 4164.2, 300 sec: 4037.5). Total num frames: 4853760. Throughput: 0: 1044.9. Samples: 211928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:08:23,333][02958] Avg episode reward: [(0, '22.528')] |
|
[2025-02-16 15:08:26,119][15362] Updated weights for policy 0, policy_version 1188 (0.0015) |
|
[2025-02-16 15:08:28,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4038.8). Total num frames: 4874240. Throughput: 0: 1059.6. Samples: 218032. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:08:28,332][02958] Avg episode reward: [(0, '23.681')] |
|
[2025-02-16 15:08:28,339][15349] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001190_4874240.pth... |
|
[2025-02-16 15:08:28,472][15349] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth |
|
[2025-02-16 15:08:33,327][02958] Fps is (10 sec: 4506.2, 60 sec: 4232.5, 300 sec: 4058.8). Total num frames: 4898816. Throughput: 0: 1061.0. Samples: 221566. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:08:33,331][02958] Avg episode reward: [(0, '22.871')] |
|
[2025-02-16 15:08:34,689][15362] Updated weights for policy 0, policy_version 1198 (0.0015) |
|
[2025-02-16 15:08:38,333][02958] Fps is (10 sec: 4093.9, 60 sec: 4163.9, 300 sec: 4041.3). Total num frames: 4915200. Throughput: 0: 1041.3. Samples: 227440. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-16 15:08:38,335][02958] Avg episode reward: [(0, '22.641')] |
|
[2025-02-16 15:08:43,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4164.3, 300 sec: 4042.6). Total num frames: 4935680. Throughput: 0: 1055.4. Samples: 233674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:08:43,330][02958] Avg episode reward: [(0, '24.075')] |
|
[2025-02-16 15:08:45,180][15362] Updated weights for policy 0, policy_version 1208 (0.0015) |
|
[2025-02-16 15:08:48,327][02958] Fps is (10 sec: 4508.0, 60 sec: 4164.3, 300 sec: 4061.1). Total num frames: 4960256. Throughput: 0: 1054.7. Samples: 237290. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 15:08:48,331][02958] Avg episode reward: [(0, '23.254')] |
|
[2025-02-16 15:08:53,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4044.8). Total num frames: 4976640. Throughput: 0: 1044.5. Samples: 243182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:08:53,330][02958] Avg episode reward: [(0, '22.487')] |
|
[2025-02-16 15:08:55,406][15362] Updated weights for policy 0, policy_version 1218 (0.0023) |
|
[2025-02-16 15:08:58,328][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 4062.6). Total num frames: 5001216. Throughput: 0: 1054.4. Samples: 249558. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:08:58,331][02958] Avg episode reward: [(0, '22.620')] |
|
[2025-02-16 15:09:03,327][02958] Fps is (10 sec: 4915.2, 60 sec: 4232.5, 300 sec: 4079.6). Total num frames: 5025792. Throughput: 0: 1054.8. Samples: 253184. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 15:09:03,331][02958] Avg episode reward: [(0, '21.631')] |
|
[2025-02-16 15:09:04,110][15362] Updated weights for policy 0, policy_version 1228 (0.0022) |
|
[2025-02-16 15:09:08,327][02958] Fps is (10 sec: 3686.5, 60 sec: 4164.6, 300 sec: 4047.8). Total num frames: 5038080. Throughput: 0: 1040.4. Samples: 258744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:09:08,330][02958] Avg episode reward: [(0, '21.697')] |
|
[2025-02-16 15:09:13,327][02958] Fps is (10 sec: 3686.3, 60 sec: 4164.2, 300 sec: 4064.5). Total num frames: 5062656. Throughput: 0: 1051.1. Samples: 265330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:09:13,330][02958] Avg episode reward: [(0, '21.808')] |
|
[2025-02-16 15:09:14,285][15362] Updated weights for policy 0, policy_version 1238 (0.0012) |
|
[2025-02-16 15:09:18,327][02958] Fps is (10 sec: 4915.2, 60 sec: 4232.5, 300 sec: 4080.5). Total num frames: 5087232. Throughput: 0: 1052.2. Samples: 268914. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:09:18,333][02958] Avg episode reward: [(0, '21.003')] |
|
[2025-02-16 15:09:23,327][02958] Fps is (10 sec: 4096.1, 60 sec: 4164.4, 300 sec: 4065.7). Total num frames: 5103616. Throughput: 0: 1047.1. Samples: 274554. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:09:23,329][02958] Avg episode reward: [(0, '20.474')] |
|
[2025-02-16 15:09:24,562][15362] Updated weights for policy 0, policy_version 1248 (0.0023) |
|
[2025-02-16 15:09:28,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4081.1). Total num frames: 5128192. Throughput: 0: 1056.7. Samples: 281226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:09:28,329][02958] Avg episode reward: [(0, '20.026')] |
|
[2025-02-16 15:09:33,036][15362] Updated weights for policy 0, policy_version 1258 (0.0022) |
|
[2025-02-16 15:09:33,327][02958] Fps is (10 sec: 4915.2, 60 sec: 4232.5, 300 sec: 4096.0). Total num frames: 5152768. Throughput: 0: 1058.0. Samples: 284902. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:09:33,330][02958] Avg episode reward: [(0, '21.208')] |
|
[2025-02-16 15:09:38,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4164.6, 300 sec: 4067.3). Total num frames: 5165056. Throughput: 0: 1042.9. Samples: 290114. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:09:38,331][02958] Avg episode reward: [(0, '22.070')] |
|
[2025-02-16 15:09:43,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4081.9). Total num frames: 5189632. Throughput: 0: 1052.5. Samples: 296920. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:09:43,334][02958] Avg episode reward: [(0, '23.732')] |
|
[2025-02-16 15:09:43,577][15362] Updated weights for policy 0, policy_version 1268 (0.0015) |
|
[2025-02-16 15:09:48,327][02958] Fps is (10 sec: 4915.2, 60 sec: 4232.5, 300 sec: 4096.0). Total num frames: 5214208. Throughput: 0: 1052.4. Samples: 300542. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:09:48,335][02958] Avg episode reward: [(0, '24.434')] |
|
[2025-02-16 15:09:48,345][15349] Saving new best policy, reward=24.434! |
|
[2025-02-16 15:09:53,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4151.5). Total num frames: 5230592. Throughput: 0: 1045.7. Samples: 305802. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:09:53,331][02958] Avg episode reward: [(0, '24.768')] |
|
[2025-02-16 15:09:53,335][15349] Saving new best policy, reward=24.768! |
|
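Alongside the rotating checkpoints, the learner snapshots a separate "best" policy whenever the running average episode reward beats the previous record, as it does twice here (24.434, then 24.768). A minimal sketch of that check:

def maybe_save_best(model, avg_reward, best_so_far, save_fn):
    # Snapshot only on a strict improvement over the best reward seen so far.
    if best_so_far is None or avg_reward > best_so_far:
        print(f"Saving new best policy, reward={avg_reward:.3f}!")
        save_fn(model)
        return avg_reward
    return best_so_far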
[2025-02-16 15:09:54,001][15362] Updated weights for policy 0, policy_version 1278 (0.0014) |
|
[2025-02-16 15:09:58,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.6, 300 sec: 4207.1). Total num frames: 5255168. Throughput: 0: 1053.2. Samples: 312726. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:09:58,334][02958] Avg episode reward: [(0, '24.647')] |
|
[2025-02-16 15:10:02,585][15362] Updated weights for policy 0, policy_version 1288 (0.0012) |
|
[2025-02-16 15:10:03,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4207.1). Total num frames: 5275648. Throughput: 0: 1051.5. Samples: 316230. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:10:03,329][02958] Avg episode reward: [(0, '22.641')] |
|
[2025-02-16 15:10:08,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4193.2). Total num frames: 5292032. Throughput: 0: 1042.0. Samples: 321442. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 15:10:08,330][02958] Avg episode reward: [(0, '21.761')] |
|
[2025-02-16 15:10:12,848][15362] Updated weights for policy 0, policy_version 1298 (0.0019) |
|
[2025-02-16 15:10:13,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 5316608. Throughput: 0: 1051.1. Samples: 328526. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 15:10:13,330][02958] Avg episode reward: [(0, '24.330')] |
|
[2025-02-16 15:10:18,327][02958] Fps is (10 sec: 4915.2, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5341184. Throughput: 0: 1051.2. Samples: 332206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:10:18,331][02958] Avg episode reward: [(0, '23.365')] |
|
[2025-02-16 15:10:23,095][15362] Updated weights for policy 0, policy_version 1308 (0.0015) |
|
[2025-02-16 15:10:23,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 5357568. Throughput: 0: 1047.8. Samples: 337264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:10:23,329][02958] Avg episode reward: [(0, '22.426')] |
|
[2025-02-16 15:10:28,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5382144. Throughput: 0: 1059.5. Samples: 344598. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:10:28,330][02958] Avg episode reward: [(0, '22.637')] |
|
[2025-02-16 15:10:28,343][15349] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001314_5382144.pth... |
|
[2025-02-16 15:10:28,467][15349] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001066_4366336.pth |
|
[2025-02-16 15:10:31,607][15362] Updated weights for policy 0, policy_version 1318 (0.0017) |
|
[2025-02-16 15:10:33,327][02958] Fps is (10 sec: 4505.7, 60 sec: 4164.3, 300 sec: 4207.1). Total num frames: 5402624. Throughput: 0: 1057.7. Samples: 348140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:10:33,331][02958] Avg episode reward: [(0, '21.692')] |
|
[2025-02-16 15:10:38,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 5419008. Throughput: 0: 1048.0. Samples: 352960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:10:38,332][02958] Avg episode reward: [(0, '21.145')] |
|
[2025-02-16 15:10:42,067][15362] Updated weights for policy 0, policy_version 1328 (0.0013) |
|
[2025-02-16 15:10:43,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5443584. Throughput: 0: 1054.2. Samples: 360164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:10:43,333][02958] Avg episode reward: [(0, '20.727')] |
|
[2025-02-16 15:10:48,328][02958] Fps is (10 sec: 4914.8, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5468160. Throughput: 0: 1058.1. Samples: 363846. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:10:48,333][02958] Avg episode reward: [(0, '20.305')] |
|
[2025-02-16 15:10:52,457][15362] Updated weights for policy 0, policy_version 1338 (0.0023) |
|
[2025-02-16 15:10:53,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5484544. Throughput: 0: 1054.7. Samples: 368902. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:10:53,333][02958] Avg episode reward: [(0, '20.061')] |
|
[2025-02-16 15:10:58,327][02958] Fps is (10 sec: 4096.3, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5509120. Throughput: 0: 1059.9. Samples: 376220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:10:58,336][02958] Avg episode reward: [(0, '19.829')] |
|
[2025-02-16 15:11:00,797][15362] Updated weights for policy 0, policy_version 1348 (0.0014) |
|
[2025-02-16 15:11:03,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 5529600. Throughput: 0: 1058.1. Samples: 379820. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:11:03,329][02958] Avg episode reward: [(0, '20.724')] |
|
[2025-02-16 15:11:08,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5545984. Throughput: 0: 1054.5. Samples: 384718. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 15:11:08,332][02958] Avg episode reward: [(0, '22.198')] |
|
[2025-02-16 15:11:11,237][15362] Updated weights for policy 0, policy_version 1358 (0.0018) |
|
[2025-02-16 15:11:13,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5570560. Throughput: 0: 1054.4. Samples: 392044. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 15:11:13,335][02958] Avg episode reward: [(0, '22.064')] |
|
[2025-02-16 15:11:18,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4207.1). Total num frames: 5591040. Throughput: 0: 1056.4. Samples: 395676. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:11:18,332][02958] Avg episode reward: [(0, '24.350')] |
|
[2025-02-16 15:11:21,354][15362] Updated weights for policy 0, policy_version 1368 (0.0024) |
|
[2025-02-16 15:11:23,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5611520. Throughput: 0: 1062.1. Samples: 400756. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:11:23,334][02958] Avg episode reward: [(0, '24.065')] |
|
[2025-02-16 15:11:28,330][02958] Fps is (10 sec: 4504.3, 60 sec: 4232.3, 300 sec: 4220.9). Total num frames: 5636096. Throughput: 0: 1065.0. Samples: 408094. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 15:11:28,336][02958] Avg episode reward: [(0, '23.603')] |
|
[2025-02-16 15:11:29,923][15362] Updated weights for policy 0, policy_version 1378 (0.0021) |
|
[2025-02-16 15:11:33,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5656576. Throughput: 0: 1063.0. Samples: 411682. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-16 15:11:33,334][02958] Avg episode reward: [(0, '22.743')] |
|
[2025-02-16 15:11:38,327][02958] Fps is (10 sec: 3687.4, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5672960. Throughput: 0: 1059.2. Samples: 416566. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 15:11:38,332][02958] Avg episode reward: [(0, '22.674')] |
|
[2025-02-16 15:11:40,237][15362] Updated weights for policy 0, policy_version 1388 (0.0019) |
|
[2025-02-16 15:11:43,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5697536. Throughput: 0: 1058.4. Samples: 423850. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 15:11:43,332][02958] Avg episode reward: [(0, '22.770')] |
|
[2025-02-16 15:11:48,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4207.1). Total num frames: 5718016. Throughput: 0: 1059.7. Samples: 427506. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-16 15:11:48,335][02958] Avg episode reward: [(0, '22.730')] |
|
[2025-02-16 15:11:50,514][15362] Updated weights for policy 0, policy_version 1398 (0.0034) |
|
[2025-02-16 15:11:53,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5738496. Throughput: 0: 1063.9. Samples: 432592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:11:53,334][02958] Avg episode reward: [(0, '22.141')] |
|
[2025-02-16 15:11:58,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5763072. Throughput: 0: 1062.2. Samples: 439842. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:11:58,330][02958] Avg episode reward: [(0, '22.303')] |
|
[2025-02-16 15:11:58,919][15362] Updated weights for policy 0, policy_version 1408 (0.0014) |
|
[2025-02-16 15:12:03,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5783552. Throughput: 0: 1058.0. Samples: 443288. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:12:03,332][02958] Avg episode reward: [(0, '23.438')] |
|
[2025-02-16 15:12:08,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5799936. Throughput: 0: 1057.8. Samples: 448358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:12:08,329][02958] Avg episode reward: [(0, '22.328')] |
|
[2025-02-16 15:12:09,394][15362] Updated weights for policy 0, policy_version 1418 (0.0021) |
|
[2025-02-16 15:12:13,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5824512. Throughput: 0: 1056.2. Samples: 455620. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-16 15:12:13,335][02958] Avg episode reward: [(0, '22.973')] |
|
[2025-02-16 15:12:18,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 5844992. Throughput: 0: 1050.5. Samples: 458954. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:12:18,330][02958] Avg episode reward: [(0, '22.652')] |
|
[2025-02-16 15:12:19,437][15362] Updated weights for policy 0, policy_version 1428 (0.0018) |
|
[2025-02-16 15:12:23,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5865472. Throughput: 0: 1062.8. Samples: 464390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:12:23,329][02958] Avg episode reward: [(0, '22.880')] |
|
[2025-02-16 15:12:27,909][15362] Updated weights for policy 0, policy_version 1438 (0.0012) |
|
[2025-02-16 15:12:28,327][02958] Fps is (10 sec: 4505.6, 60 sec: 4232.7, 300 sec: 4221.0). Total num frames: 5890048. Throughput: 0: 1065.2. Samples: 471782. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:12:28,333][02958] Avg episode reward: [(0, '20.641')] |
|
[2025-02-16 15:12:28,347][15349] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001438_5890048.pth... |
|
[2025-02-16 15:12:28,490][15349] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001190_4874240.pth |
|
[2025-02-16 15:12:33,329][02958] Fps is (10 sec: 4095.5, 60 sec: 4164.2, 300 sec: 4207.1). Total num frames: 5906432. Throughput: 0: 1050.1. Samples: 474760. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-16 15:12:33,333][02958] Avg episode reward: [(0, '21.943')] |
|
[2025-02-16 15:12:38,327][02958] Fps is (10 sec: 3686.4, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 5926912. Throughput: 0: 1057.4. Samples: 480174. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:12:38,332][02958] Avg episode reward: [(0, '21.466')] |
|
[2025-02-16 15:12:38,406][15362] Updated weights for policy 0, policy_version 1448 (0.0031) |
|
[2025-02-16 15:12:43,327][02958] Fps is (10 sec: 4506.2, 60 sec: 4232.5, 300 sec: 4207.1). Total num frames: 5951488. Throughput: 0: 1059.6. Samples: 487526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-16 15:12:43,331][02958] Avg episode reward: [(0, '21.948')] |
|
[2025-02-16 15:12:48,327][02958] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4207.1). Total num frames: 5967872. Throughput: 0: 1045.6. Samples: 490338. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-16 15:12:48,334][02958] Avg episode reward: [(0, '21.896')] |
|
[2025-02-16 15:12:48,401][15362] Updated weights for policy 0, policy_version 1458 (0.0016) |
|
[2025-02-16 15:12:53,328][02958] Fps is (10 sec: 4095.9, 60 sec: 4232.5, 300 sec: 4221.0). Total num frames: 5992448. Throughput: 0: 1063.6. Samples: 496220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-16 15:12:53,335][02958] Avg episode reward: [(0, '22.926')] |
|
[2025-02-16 15:12:55,342][15349] Stopping Batcher_0... |
|
[2025-02-16 15:12:55,342][15349] Loop batcher_evt_loop terminating... |
|
[2025-02-16 15:12:55,342][02958] Component Batcher_0 stopped! |
|
[2025-02-16 15:12:55,349][15349] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth... |
|
[2025-02-16 15:12:55,407][15362] Weights refcount: 2 0 |
|
[2025-02-16 15:12:55,413][02958] Component InferenceWorker_p0-w0 stopped! |
|
[2025-02-16 15:12:55,418][15362] Stopping InferenceWorker_p0-w0... |
|
[2025-02-16 15:12:55,419][15362] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-02-16 15:12:55,469][15349] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001314_5382144.pth |
|
[2025-02-16 15:12:55,480][15349] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth... |
|
[2025-02-16 15:12:55,654][15349] Stopping LearnerWorker_p0... |
|
[2025-02-16 15:12:55,654][15349] Loop learner_proc0_evt_loop terminating... |
|
[2025-02-16 15:12:55,654][02958] Component LearnerWorker_p0 stopped! |
|
[2025-02-16 15:12:55,805][15365] Stopping RolloutWorker_w2... |
|
[2025-02-16 15:12:55,806][15365] Loop rollout_proc2_evt_loop terminating... |
|
[2025-02-16 15:12:55,805][02958] Component RolloutWorker_w2 stopped! |
|
[2025-02-16 15:12:55,813][02958] Component RolloutWorker_w3 stopped! |
|
[2025-02-16 15:12:55,821][15366] Stopping RolloutWorker_w3... |
|
[2025-02-16 15:12:55,822][15366] Loop rollout_proc3_evt_loop terminating... |
|
[2025-02-16 15:12:55,838][02958] Component RolloutWorker_w7 stopped! |
|
[2025-02-16 15:12:55,842][15370] Stopping RolloutWorker_w7... |
|
[2025-02-16 15:12:55,843][15370] Loop rollout_proc7_evt_loop terminating... |
|
[2025-02-16 15:12:55,876][15363] Stopping RolloutWorker_w0... |
|
[2025-02-16 15:12:55,875][02958] Component RolloutWorker_w0 stopped! |
|
[2025-02-16 15:12:55,879][15363] Loop rollout_proc0_evt_loop terminating... |
|
[2025-02-16 15:12:55,893][15369] Stopping RolloutWorker_w6... |
|
[2025-02-16 15:12:55,895][15367] Stopping RolloutWorker_w4... |
|
[2025-02-16 15:12:55,893][02958] Component RolloutWorker_w6 stopped! |
|
[2025-02-16 15:12:55,893][15364] Stopping RolloutWorker_w1... |
|
[2025-02-16 15:12:55,897][02958] Component RolloutWorker_w1 stopped! |
|
[2025-02-16 15:12:55,898][15369] Loop rollout_proc6_evt_loop terminating... |
|
[2025-02-16 15:12:55,898][15367] Loop rollout_proc4_evt_loop terminating... |
|
[2025-02-16 15:12:55,900][02958] Component RolloutWorker_w4 stopped! |
|
[2025-02-16 15:12:55,902][15368] Stopping RolloutWorker_w5... |
|
[2025-02-16 15:12:55,904][02958] Component RolloutWorker_w5 stopped! |
|
[2025-02-16 15:12:55,898][15364] Loop rollout_proc1_evt_loop terminating... |
|
[2025-02-16 15:12:55,909][02958] Waiting for process learner_proc0 to stop... |
|
[2025-02-16 15:12:55,916][15368] Loop rollout_proc5_evt_loop terminating... |
|
[2025-02-16 15:12:57,320][02958] Waiting for process inference_proc0-0 to join... |
|
[2025-02-16 15:12:57,328][02958] Waiting for process rollout_proc0 to join... |
|
[2025-02-16 15:12:59,889][02958] Waiting for process rollout_proc1 to join... |
|
[2025-02-16 15:12:59,891][02958] Waiting for process rollout_proc2 to join... |
|
[2025-02-16 15:12:59,892][02958] Waiting for process rollout_proc3 to join... |
|
[2025-02-16 15:12:59,894][02958] Waiting for process rollout_proc4 to join... |
|
[2025-02-16 15:12:59,895][02958] Waiting for process rollout_proc5 to join... |
|
[2025-02-16 15:12:59,898][02958] Waiting for process rollout_proc6 to join... |
|
[2025-02-16 15:12:59,899][02958] Waiting for process rollout_proc7 to join... |
|
[2025-02-16 15:12:59,903][02958] Batcher 0 profile tree view: |
|
batching: 12.3884, releasing_batches: 0.0155 |
|
[2025-02-16 15:12:59,904][02958] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
wait_policy_total: 191.2322 |
|
update_model: 3.9100 |
|
weight_update: 0.0024 |
|
one_step: 0.0058 |
|
handle_policy_step: 273.8398 |
|
deserialize: 6.5623, stack: 1.4737, obs_to_device_normalize: 58.4891, forward: 139.5654, send_messages: 13.3637 |
|
prepare_outputs: 42.5374 |
|
to_cpu: 26.5358 |
|
[2025-02-16 15:12:59,905][02958] Learner 0 profile tree view: |
|
misc: 0.0020, prepare_batch: 7.1604 |
|
train: 37.2971 |
|
epoch_init: 0.0046, minibatch_init: 0.0119, losses_postprocess: 0.3260, kl_divergence: 0.3608, after_optimizer: 1.5747 |
|
calculate_losses: 12.5370 |
|
losses_init: 0.0016, forward_head: 0.8822, bptt_initial: 8.0996, tail: 0.5586, advantages_returns: 0.1494, losses: 1.7848 |
|
bptt: 0.9527 |
|
bptt_forward_core: 0.9057 |
|
update: 22.2356 |
|
clip: 0.4276 |
|
[2025-02-16 15:12:59,906][02958] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.1236, enqueue_policy_requests: 42.1775, env_step: 381.0958, overhead: 5.2686, complete_rollouts: 3.5399 |
|
save_policy_outputs: 8.0638 |
|
split_output_tensors: 2.9330 |
|
[2025-02-16 15:12:59,908][02958] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.1445, enqueue_policy_requests: 43.5245, env_step: 383.1037, overhead: 5.3685, complete_rollouts: 2.9886 |
|
save_policy_outputs: 8.6194 |
|
split_output_tensors: 3.2427 |
|
[2025-02-16 15:12:59,909][02958] Loop Runner_EvtLoop terminating... |
|
[2025-02-16 15:12:59,911][02958] Runner profile tree view: |
|
main_loop: 508.4037 |
|
[2025-02-16 15:12:59,912][02958] Collected {0: 6004736}, FPS: 3931.6 |
|
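The summary is internally consistent: the session added 6004736 − 4005888 = 1998848 env frames over the 508.4-second main loop, which is the reported ~3931.6 FPS:

frames = 6004736 - 4005888   # new env frames this session: 1,998,848
fps = frames / 508.4037      # main_loop wall time from the Runner profile
print(round(fps, 1))         # -> 3931.6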
[2025-02-16 15:18:10,123][02958] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-16 15:18:10,125][02958] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-02-16 15:18:10,127][02958] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-16 15:18:10,129][02958] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-16 15:18:10,131][02958] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-16 15:18:10,132][02958] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-16 15:18:10,134][02958] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-02-16 15:18:10,137][02958] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-16 15:18:10,137][02958] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-02-16 15:18:10,139][02958] Adding new argument 'hf_repository'='AndiB93/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-02-16 15:18:10,140][02958] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-16 15:18:10,141][02958] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-16 15:18:10,142][02958] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-16 15:18:10,143][02958] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-16 15:18:10,144][02958] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
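The evaluation entry point rebuilds its arguments by loading the saved experiment config and layering command-line values on top: existing keys are overridden, and keys the training run never saw are added. A sketch of that merge (the function and exact message formatting are approximations):

import json

def merge_config(config_path, overrides):
    with open(config_path) as f:
        cfg = json.load(f)
    for key, value in overrides.items():
        if key in cfg:
            print(f"Overriding arg '{key}' with value {value!r} passed from command line")
        else:
            print(f"Adding new argument '{key}'={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg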
[2025-02-16 15:18:10,179][02958] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-16 15:18:10,181][02958] RunningMeanStd input shape: (1,) |
|
[2025-02-16 15:18:10,192][02958] ConvEncoder: input_channels=3 |
|
[2025-02-16 15:18:10,227][02958] Conv encoder output size: 512 |
|
[2025-02-16 15:18:10,228][02958] Policy head output size: 512 |
|
[2025-02-16 15:18:10,257][02958] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth... |
|
[2025-02-16 15:18:10,667][02958] Num frames 100... |
|
[2025-02-16 15:18:10,793][02958] Num frames 200... |
|
[2025-02-16 15:18:10,929][02958] Num frames 300... |
|
[2025-02-16 15:18:11,060][02958] Num frames 400... |
|
[2025-02-16 15:18:11,211][02958] Num frames 500... |
|
[2025-02-16 15:18:11,351][02958] Num frames 600... |
|
[2025-02-16 15:18:11,478][02958] Num frames 700... |
|
[2025-02-16 15:18:11,608][02958] Num frames 800... |
|
[2025-02-16 15:18:11,735][02958] Num frames 900... |
|
[2025-02-16 15:18:11,865][02958] Num frames 1000... |
|
[2025-02-16 15:18:11,995][02958] Num frames 1100... |
|
[2025-02-16 15:18:12,133][02958] Num frames 1200... |
|
[2025-02-16 15:18:12,266][02958] Num frames 1300... |
|
[2025-02-16 15:18:12,385][02958] Avg episode rewards: #0: 32.440, true rewards: #0: 13.440 |
|
[2025-02-16 15:18:12,387][02958] Avg episode reward: 32.440, avg true_objective: 13.440 |
|
[2025-02-16 15:18:12,460][02958] Num frames 1400... |
|
[2025-02-16 15:18:12,591][02958] Num frames 1500... |
|
[2025-02-16 15:18:12,719][02958] Num frames 1600... |
|
[2025-02-16 15:18:12,850][02958] Num frames 1700... |
|
[2025-02-16 15:18:12,979][02958] Num frames 1800... |
|
[2025-02-16 15:18:13,117][02958] Num frames 1900... |
|
[2025-02-16 15:18:13,247][02958] Num frames 2000... |
|
[2025-02-16 15:18:13,382][02958] Num frames 2100... |
|
[2025-02-16 15:18:13,515][02958] Num frames 2200... |
|
[2025-02-16 15:18:13,643][02958] Num frames 2300... |
|
[2025-02-16 15:18:13,774][02958] Num frames 2400... |
|
[2025-02-16 15:18:13,912][02958] Avg episode rewards: #0: 28.820, true rewards: #0: 12.320 |
|
[2025-02-16 15:18:13,914][02958] Avg episode reward: 28.820, avg true_objective: 12.320 |
|
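The reward lines are running means over all evaluation episodes so far, so individual returns can be recovered: with episode 1 at 32.440 and a two-episode mean of 28.820, episode 2 scored 2 × 28.820 − 32.440 = 25.200 (and 2 × 12.320 − 13.440 = 11.200 on the true objective):

ep1, avg2 = 32.440, 28.820
ep2 = 2 * avg2 - ep1      # invert the running mean for episode 2
print(f"{ep2:.3f}")       # -> 25.200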
[2025-02-16 15:18:13,966][02958] Num frames 2500... |
|
[2025-02-16 15:18:14,100][02958] Num frames 2600... |
|
[2025-02-16 15:18:14,235][02958] Num frames 2700... |
|
[2025-02-16 15:18:14,375][02958] Num frames 2800... |
|
[2025-02-16 15:18:14,506][02958] Num frames 2900... |
|
[2025-02-16 15:18:14,641][02958] Num frames 3000... |
|
[2025-02-16 15:18:14,769][02958] Num frames 3100... |
|
[2025-02-16 15:18:14,901][02958] Num frames 3200... |
|
[2025-02-16 15:18:15,030][02958] Num frames 3300... |
|
[2025-02-16 15:18:15,166][02958] Num frames 3400... |
|
[2025-02-16 15:18:15,298][02958] Num frames 3500... |
|
[2025-02-16 15:18:15,436][02958] Num frames 3600... |
|
[2025-02-16 15:18:15,563][02958] Num frames 3700... |
|
[2025-02-16 15:18:15,693][02958] Num frames 3800... |
|
[2025-02-16 15:18:15,824][02958] Num frames 3900... |
|
[2025-02-16 15:18:15,952][02958] Num frames 4000... |
|
[2025-02-16 15:18:16,086][02958] Avg episode rewards: #0: 31.533, true rewards: #0: 13.533 |
|
[2025-02-16 15:18:16,088][02958] Avg episode reward: 31.533, avg true_objective: 13.533 |
|
[2025-02-16 15:18:16,145][02958] Num frames 4100... |
|
[2025-02-16 15:18:16,275][02958] Num frames 4200... |
|
[2025-02-16 15:18:16,424][02958] Num frames 4300... |
|
[2025-02-16 15:18:16,603][02958] Num frames 4400... |
|
[2025-02-16 15:18:16,779][02958] Num frames 4500... |
|
[2025-02-16 15:18:16,948][02958] Num frames 4600... |
|
[2025-02-16 15:18:17,127][02958] Num frames 4700... |
|
[2025-02-16 15:18:17,181][02958] Avg episode rewards: #0: 27.000, true rewards: #0: 11.750 |
|
[2025-02-16 15:18:17,183][02958] Avg episode reward: 27.000, avg true_objective: 11.750 |
|
[2025-02-16 15:18:17,350][02958] Num frames 4800... |
|
[2025-02-16 15:18:17,524][02958] Num frames 4900... |
|
[2025-02-16 15:18:17,691][02958] Num frames 5000... |
|
[2025-02-16 15:18:17,868][02958] Num frames 5100... |
|
[2025-02-16 15:18:18,041][02958] Num frames 5200... |
|
[2025-02-16 15:18:18,236][02958] Num frames 5300... |
|
[2025-02-16 15:18:18,439][02958] Num frames 5400... |
|
[2025-02-16 15:18:18,602][02958] Num frames 5500... |
|
[2025-02-16 15:18:18,735][02958] Num frames 5600... |
|
[2025-02-16 15:18:18,861][02958] Num frames 5700... |
|
[2025-02-16 15:18:19,027][02958] Avg episode rewards: #0: 26.176, true rewards: #0: 11.576 |
|
[2025-02-16 15:18:19,029][02958] Avg episode reward: 26.176, avg true_objective: 11.576 |
|
[2025-02-16 15:18:19,048][02958] Num frames 5800... |
|
[2025-02-16 15:18:19,187][02958] Num frames 5900... |
|
[2025-02-16 15:18:19,315][02958] Num frames 6000... |
|
[2025-02-16 15:18:19,445][02958] Num frames 6100... |
|
[2025-02-16 15:18:19,582][02958] Num frames 6200... |
|
[2025-02-16 15:18:19,711][02958] Num frames 6300... |
|
[2025-02-16 15:18:19,843][02958] Num frames 6400... |
|
[2025-02-16 15:18:19,970][02958] Num frames 6500... |
|
[2025-02-16 15:18:20,105][02958] Num frames 6600... |
|
[2025-02-16 15:18:20,232][02958] Num frames 6700... |
|
[2025-02-16 15:18:20,359][02958] Num frames 6800... |
|
[2025-02-16 15:18:20,491][02958] Num frames 6900... |
|
[2025-02-16 15:18:20,601][02958] Avg episode rewards: #0: 25.567, true rewards: #0: 11.567 |
|
[2025-02-16 15:18:20,603][02958] Avg episode reward: 25.567, avg true_objective: 11.567 |
|
[2025-02-16 15:18:20,682][02958] Num frames 7000... |
|
[2025-02-16 15:18:20,813][02958] Num frames 7100... |
|
[2025-02-16 15:18:20,940][02958] Num frames 7200... |
|
[2025-02-16 15:18:21,073][02958] Num frames 7300... |
|
[2025-02-16 15:18:21,209][02958] Num frames 7400... |
|
[2025-02-16 15:18:21,341][02958] Num frames 7500... |
|
[2025-02-16 15:18:21,471][02958] Num frames 7600... |
|
[2025-02-16 15:18:21,606][02958] Num frames 7700... |
|
[2025-02-16 15:18:21,739][02958] Num frames 7800... |
|
[2025-02-16 15:18:21,868][02958] Num frames 7900... |
|
[2025-02-16 15:18:21,999][02958] Num frames 8000... |
|
[2025-02-16 15:18:22,136][02958] Num frames 8100... |
|
[2025-02-16 15:18:22,264][02958] Avg episode rewards: #0: 25.509, true rewards: #0: 11.651 |
|
[2025-02-16 15:18:22,265][02958] Avg episode reward: 25.509, avg true_objective: 11.651 |
|
[2025-02-16 15:18:22,326][02958] Num frames 8200... |
|
[2025-02-16 15:18:22,463][02958] Num frames 8300... |
|
[2025-02-16 15:18:22,611][02958] Num frames 8400... |
|
[2025-02-16 15:18:22,750][02958] Num frames 8500... |
|
[2025-02-16 15:18:22,880][02958] Num frames 8600... |
|
[2025-02-16 15:18:23,010][02958] Num frames 8700... |
|
[2025-02-16 15:18:23,149][02958] Num frames 8800... |
|
[2025-02-16 15:18:23,282][02958] Num frames 8900... |
|
[2025-02-16 15:18:23,410][02958] Num frames 9000... |
|
[2025-02-16 15:18:23,541][02958] Num frames 9100... |
|
[2025-02-16 15:18:23,618][02958] Avg episode rewards: #0: 24.895, true rewards: #0: 11.395 |
|
[2025-02-16 15:18:23,619][02958] Avg episode reward: 24.895, avg true_objective: 11.395 |
|
[2025-02-16 15:18:23,739][02958] Num frames 9200... |
|
[2025-02-16 15:18:23,868][02958] Num frames 9300... |
|
[2025-02-16 15:18:23,998][02958] Num frames 9400... |
|
[2025-02-16 15:18:24,134][02958] Num frames 9500... |
|
[2025-02-16 15:18:24,265][02958] Num frames 9600... |
|
[2025-02-16 15:18:24,396][02958] Num frames 9700... |
|
[2025-02-16 15:18:24,527][02958] Num frames 9800... |
|
[2025-02-16 15:18:24,661][02958] Num frames 9900... |
|
[2025-02-16 15:18:24,813][02958] Num frames 10000... |
|
[2025-02-16 15:18:24,945][02958] Num frames 10100... |
|
[2025-02-16 15:18:25,073][02958] Num frames 10200... |
|
[2025-02-16 15:18:25,137][02958] Avg episode rewards: #0: 24.671, true rewards: #0: 11.338 |
|
[2025-02-16 15:18:25,139][02958] Avg episode reward: 24.671, avg true_objective: 11.338 |
|
[2025-02-16 15:18:25,260][02958] Num frames 10300... |
|
[2025-02-16 15:18:25,389][02958] Num frames 10400... |
|
[2025-02-16 15:18:25,520][02958] Num frames 10500... |
|
[2025-02-16 15:18:25,650][02958] Num frames 10600... |
|
[2025-02-16 15:18:25,788][02958] Num frames 10700... |
|
[2025-02-16 15:18:25,920][02958] Num frames 10800... |
|
[2025-02-16 15:18:26,047][02958] Num frames 10900... |
|
[2025-02-16 15:18:26,183][02958] Num frames 11000... |
|
[2025-02-16 15:18:26,312][02958] Num frames 11100... |
|
[2025-02-16 15:18:26,447][02958] Num frames 11200... |
|
[2025-02-16 15:18:26,576][02958] Num frames 11300... |
|
[2025-02-16 15:18:26,708][02958] Num frames 11400... |
|
[2025-02-16 15:18:26,843][02958] Num frames 11500... |
|
[2025-02-16 15:18:26,973][02958] Num frames 11600... |
|
[2025-02-16 15:18:27,125][02958] Avg episode rewards: #0: 25.476, true rewards: #0: 11.676 |
|
[2025-02-16 15:18:27,127][02958] Avg episode reward: 25.476, avg true_objective: 11.676 |
|
[2025-02-16 15:19:31,140][02958] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
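The evaluation frames collected above are finally encoded to replay.mp4, the video that accompanies the checkpoint in the hub repository. A hedged sketch using imageio, which is an assumption; the actual writer may differ:

import imageio

def save_replay(frames, path="replay.mp4", fps=35):
    # frames: list of HxWx3 uint8 arrays from the evaluation episodes
    imageio.mimwrite(path, frames, fps=fps)
    print(f"Replay video saved to {path}!")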
|