[2025-07-07 11:03:24,441][04410] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-07-07 11:03:24,444][04410] Rollout worker 0 uses device cpu
[2025-07-07 11:03:24,445][04410] Rollout worker 1 uses device cpu
[2025-07-07 11:03:24,446][04410] Rollout worker 2 uses device cpu
[2025-07-07 11:03:24,447][04410] Rollout worker 3 uses device cpu
[2025-07-07 11:03:24,448][04410] Rollout worker 4 uses device cpu
[2025-07-07 11:03:24,450][04410] Rollout worker 5 uses device cpu
[2025-07-07 11:03:24,452][04410] Rollout worker 6 uses device cpu
[2025-07-07 11:03:24,453][04410] Rollout worker 7 uses device cpu
[2025-07-07 11:03:24,596][04410] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-07-07 11:03:24,597][04410] InferenceWorker_p0-w0: min num requests: 2
[2025-07-07 11:03:24,626][04410] Starting all processes...
[2025-07-07 11:03:24,626][04410] Starting process learner_proc0
[2025-07-07 11:03:24,678][04410] Starting all processes...
[2025-07-07 11:03:24,684][04410] Starting process inference_proc0-0
[2025-07-07 11:03:24,687][04410] Starting process rollout_proc2
[2025-07-07 11:03:24,687][04410] Starting process rollout_proc1
[2025-07-07 11:03:24,688][04410] Starting process rollout_proc3
[2025-07-07 11:03:24,689][04410] Starting process rollout_proc4
[2025-07-07 11:03:24,690][04410] Starting process rollout_proc5
[2025-07-07 11:03:24,690][04410] Starting process rollout_proc6
[2025-07-07 11:03:24,690][04410] Starting process rollout_proc7
[2025-07-07 11:03:24,687][04410] Starting process rollout_proc0
[2025-07-07 11:03:40,130][04851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-07-07 11:03:40,134][04851] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-07-07 11:03:40,201][04851] Num visible devices: 1
[2025-07-07 11:03:40,222][04851] Starting seed is not provided
[2025-07-07 11:03:40,223][04851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-07-07 11:03:40,223][04851] Initializing actor-critic model on device cuda:0
[2025-07-07 11:03:40,224][04851] RunningMeanStd input shape: (3, 72, 128)
[2025-07-07 11:03:40,228][04851] RunningMeanStd input shape: (1,)
[2025-07-07 11:03:40,295][04851] ConvEncoder: input_channels=3
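The `RunningMeanStd input shape` lines above refer to running normalizers over the observation stream (shape `(3, 72, 128)`) and the returns (shape `(1,)`). A minimal sketch of such a running mean/variance tracker, using the standard parallel-variance merge (this is an illustration, not Sample Factory's `RunningMeanStdInPlace` implementation):

```python
class RunningMeanStd:
    """Running mean/variance over a stream of batches, merged with the
    parallel-variance formula. A sketch of the normalizer whose input
    shapes the log reports: (3, 72, 128) for observations, (1,) for returns."""

    def __init__(self, eps=1e-4):
        # eps acts as a tiny prior count so the first update is well-defined
        self.mean, self.var, self.count = 0.0, 1.0, eps

    def update(self, batch_mean, batch_var, batch_count):
        delta = batch_mean - self.mean
        total = self.count + batch_count
        self.mean += delta * batch_count / total
        m_a = self.var * self.count
        m_b = batch_var * batch_count
        self.var = (m_a + m_b + delta ** 2 * self.count * batch_count / total) / total
        self.count = total


rms = RunningMeanStd()
rms.update(10.0, 0.0, 1)      # first batch dominates the tiny prior
rms.update(10.0, 0.0, 1000)   # a large consistent batch pins the stats
```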
[2025-07-07 11:03:41,140][04864] Worker 2 uses CPU cores [0]
[2025-07-07 11:03:41,186][04869] Worker 4 uses CPU cores [0]
[2025-07-07 11:03:41,188][04872] Worker 0 uses CPU cores [0]
[2025-07-07 11:03:41,205][04870] Worker 6 uses CPU cores [0]
[2025-07-07 11:03:41,206][04867] Worker 1 uses CPU cores [1]
[2025-07-07 11:03:41,240][04851] Conv encoder output size: 512
[2025-07-07 11:03:41,240][04851] Policy head output size: 512
[2025-07-07 11:03:41,246][04866] Worker 3 uses CPU cores [1]
[2025-07-07 11:03:41,303][04868] Worker 5 uses CPU cores [1]
[2025-07-07 11:03:41,315][04865] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-07-07 11:03:41,316][04865] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-07-07 11:03:41,327][04851] Created Actor Critic model with architecture:
[2025-07-07 11:03:41,327][04851] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
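The `Conv encoder output size: 512` line above can be reproduced from the input shape `(3, 72, 128)` with the usual Conv2d output-size formula. The filter spec below is an assumption (Sample Factory's default `convnet_simple` head: 32x8s4, 64x4s2, 128x3s2); the log itself only confirms the input shape and the final 512-dim output of the encoder MLP:

```python
def conv_out(size, kernel, stride, padding=0):
    """Spatial output size of one Conv2d dimension."""
    return (size + 2 * padding - kernel) // stride + 1

# Assumed conv head (out_channels, kernel, stride) — convnet_simple defaults
layers = [(32, 8, 4), (64, 4, 2), (128, 3, 2)]

h, w = 72, 128  # resized Doom frame, as reported in the log
for _, k, s in layers:
    h, w = conv_out(h, k, s), conv_out(w, k, s)

flat = 128 * h * w  # last layer's channels times remaining spatial extent
# an assumed Linear(flat, 512) + ELU then yields the reported encoder size 512
```

With these filters the spatial map shrinks 72x128 → 17x31 → 7x14 → 3x6, giving a 2304-dim flattened feature that the MLP projects to 512.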
[2025-07-07 11:03:41,342][04865] Num visible devices: 1
[2025-07-07 11:03:41,390][04871] Worker 7 uses CPU cores [1]
[2025-07-07 11:03:41,682][04851] Using optimizer <class 'torch.optim.adam.Adam'>
[2025-07-07 11:03:44,589][04410] Heartbeat connected on Batcher_0
[2025-07-07 11:03:44,597][04410] Heartbeat connected on InferenceWorker_p0-w0
[2025-07-07 11:03:44,603][04410] Heartbeat connected on RolloutWorker_w0
[2025-07-07 11:03:44,610][04410] Heartbeat connected on RolloutWorker_w2
[2025-07-07 11:03:44,614][04410] Heartbeat connected on RolloutWorker_w1
[2025-07-07 11:03:44,617][04410] Heartbeat connected on RolloutWorker_w3
[2025-07-07 11:03:44,618][04410] Heartbeat connected on RolloutWorker_w4
[2025-07-07 11:03:44,620][04410] Heartbeat connected on RolloutWorker_w5
[2025-07-07 11:03:44,622][04410] Heartbeat connected on RolloutWorker_w6
[2025-07-07 11:03:44,628][04410] Heartbeat connected on RolloutWorker_w7
[2025-07-07 11:03:46,951][04851] No checkpoints found
[2025-07-07 11:03:46,951][04851] Did not load from checkpoint, starting from scratch!
[2025-07-07 11:03:46,952][04851] Initialized policy 0 weights for model version 0
[2025-07-07 11:03:46,956][04851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-07-07 11:03:46,967][04851] LearnerWorker_p0 finished initialization!
[2025-07-07 11:03:46,972][04410] Heartbeat connected on LearnerWorker_p0
[2025-07-07 11:03:47,161][04865] RunningMeanStd input shape: (3, 72, 128)
[2025-07-07 11:03:47,163][04865] RunningMeanStd input shape: (1,)
[2025-07-07 11:03:47,176][04865] ConvEncoder: input_channels=3
[2025-07-07 11:03:47,292][04865] Conv encoder output size: 512
[2025-07-07 11:03:47,293][04865] Policy head output size: 512
[2025-07-07 11:03:47,327][04410] Inference worker 0-0 is ready!
[2025-07-07 11:03:47,328][04410] All inference workers are ready! Signal rollout workers to start!
[2025-07-07 11:03:47,520][04864] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-07-07 11:03:47,536][04869] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-07-07 11:03:47,581][04868] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-07-07 11:03:47,582][04871] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-07-07 11:03:47,585][04867] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-07-07 11:03:47,591][04866] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-07-07 11:03:47,591][04872] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-07-07 11:03:47,652][04870] Doom resolution: 160x120, resize resolution: (128, 72)
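Each rollout worker above renders Doom at its native 160x120 and downscales to `(128, 72)` for the network. A nearest-neighbour resize is the simplest way to sketch this step (the real pipeline likely uses a proper image resize, e.g. OpenCV interpolation; this is only an illustration of the shape change):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) frame via integer index maps."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return img[rows[:, None], cols]

frame = np.zeros((120, 160, 3), dtype=np.uint8)  # native Doom frame, 160x120 (W x H)
obs = resize_nearest(frame, 72, 128)             # 72 rows x 128 cols, matching (128, 72)
```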
[2025-07-07 11:03:48,661][04864] Decorrelating experience for 0 frames...
[2025-07-07 11:03:48,664][04871] Decorrelating experience for 0 frames...
[2025-07-07 11:03:48,661][04868] Decorrelating experience for 0 frames...
[2025-07-07 11:03:49,031][04864] Decorrelating experience for 32 frames...
[2025-07-07 11:03:49,116][04410] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-07-07 11:03:49,312][04868] Decorrelating experience for 32 frames...
[2025-07-07 11:03:49,314][04871] Decorrelating experience for 32 frames...
[2025-07-07 11:03:50,061][04872] Decorrelating experience for 0 frames...
[2025-07-07 11:03:50,099][04864] Decorrelating experience for 64 frames...
[2025-07-07 11:03:50,319][04868] Decorrelating experience for 64 frames...
[2025-07-07 11:03:50,330][04871] Decorrelating experience for 64 frames...
[2025-07-07 11:03:50,859][04872] Decorrelating experience for 32 frames...
[2025-07-07 11:03:50,967][04864] Decorrelating experience for 96 frames...
[2025-07-07 11:03:51,191][04866] Decorrelating experience for 0 frames...
[2025-07-07 11:03:51,534][04871] Decorrelating experience for 96 frames...
[2025-07-07 11:03:51,536][04868] Decorrelating experience for 96 frames...
[2025-07-07 11:03:51,986][04866] Decorrelating experience for 32 frames...
[2025-07-07 11:03:51,990][04872] Decorrelating experience for 64 frames...
[2025-07-07 11:03:52,672][04872] Decorrelating experience for 96 frames...
[2025-07-07 11:03:53,488][04866] Decorrelating experience for 64 frames...
[2025-07-07 11:03:54,118][04410] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 172.8. Samples: 864. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-07-07 11:03:54,119][04410] Avg episode reward: [(0, '2.468')]
[2025-07-07 11:03:55,974][04851] Signal inference workers to stop experience collection...
[2025-07-07 11:03:56,003][04865] InferenceWorker_p0-w0: stopping experience collection
[2025-07-07 11:03:56,068][04866] Decorrelating experience for 96 frames...
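The `Decorrelating experience for N frames` lines show each worker burning a different number of frames (0, 32, 64, 96) before real collection, so that episodes across workers are out of phase. A sketch of that warm-up step, against a hypothetical gym-style env (the stub class below exists only for illustration):

```python
import random

class _CountingEnv:
    """Hypothetical minimal env stub; only used to demonstrate the sketch."""
    def __init__(self, episode_len=40):
        self.episode_len = episode_len
        self.steps_taken = 0
        self._t = 0

    def reset(self):
        self._t = 0
        return 0

    def sample_action(self):
        return random.randint(0, 4)  # 5 discrete actions, as in the policy head

    def step(self, action):
        self._t += 1
        self.steps_taken += 1
        done = self._t >= self.episode_len
        return 0, 0.0, done, {}

def decorrelate(env, n_frames):
    """Advance the env by n_frames random actions before real collection,
    resetting whenever an episode ends — the warm-up the log reports."""
    env.reset()
    for _ in range(n_frames):
        _, _, done, _ = env.step(env.sample_action())
        if done:
            env.reset()

env = _CountingEnv()
decorrelate(env, 96)  # the largest warm-up seen in the log
```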
[2025-07-07 11:03:57,339][04851] Signal inference workers to resume experience collection...
[2025-07-07 11:03:57,346][04865] InferenceWorker_p0-w0: resuming experience collection
[2025-07-07 11:03:59,116][04410] Fps is (10 sec: 1228.8, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 12288. Throughput: 0: 224.8. Samples: 2248. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-07-07 11:03:59,120][04410] Avg episode reward: [(0, '3.433')]
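The recurring `Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)` lines report throughput over trailing windows of the frame counter; the first report shows `nan` because a single sample spans no time. A sketch of such windowed FPS accounting (an illustration, not Sample Factory's reporting code):

```python
from collections import deque

class WindowedFps:
    """Frames/sec over trailing time windows, as in the log's
    '(10 sec: ..., 60 sec: ..., 300 sec: ...)' reports."""

    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.samples = deque()  # (timestamp_sec, total_frames)

    def record(self, t, total_frames):
        self.samples.append((t, total_frames))
        # keep only the history the longest window needs
        while self.samples and t - self.samples[0][0] > max(self.windows):
            self.samples.popleft()

    def fps(self, window):
        t_now, f_now = self.samples[-1]
        # oldest retained sample that still falls inside the window
        t_old, f_old = next(
            ((ts, fr) for ts, fr in self.samples if t_now - ts <= window),
            self.samples[-1],
        )
        dt = t_now - t_old
        return (f_now - f_old) / dt if dt > 0 else float("nan")


meter = WindowedFps()
meter.record(0.0, 0)
meter.record(5.0, 5000)
rate = meter.fps(10)  # frames gained over the trailing 10 s window
```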
[2025-07-07 11:04:04,116][04410] Fps is (10 sec: 3277.2, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 32768. Throughput: 0: 482.7. Samples: 7240. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:04:04,119][04410] Avg episode reward: [(0, '3.921')]
[2025-07-07 11:04:05,401][04865] Updated weights for policy 0, policy_version 10 (0.0135)
[2025-07-07 11:04:09,116][04410] Fps is (10 sec: 4096.0, 60 sec: 2662.4, 300 sec: 2662.4). Total num frames: 53248. Throughput: 0: 695.7. Samples: 13914. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:04:09,119][04410] Avg episode reward: [(0, '4.274')]
[2025-07-07 11:04:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 2785.3, 300 sec: 2785.3). Total num frames: 69632. Throughput: 0: 640.5. Samples: 16012. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:04:14,118][04410] Avg episode reward: [(0, '4.372')]
[2025-07-07 11:04:16,046][04865] Updated weights for policy 0, policy_version 20 (0.0017)
[2025-07-07 11:04:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3003.7, 300 sec: 3003.7). Total num frames: 90112. Throughput: 0: 726.8. Samples: 21804. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:04:19,118][04410] Avg episode reward: [(0, '4.565')]
[2025-07-07 11:04:24,116][04410] Fps is (10 sec: 4095.9, 60 sec: 3159.8, 300 sec: 3159.8). Total num frames: 110592. Throughput: 0: 800.6. Samples: 28022. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:04:24,118][04410] Avg episode reward: [(0, '4.546')]
[2025-07-07 11:04:24,123][04851] Saving new best policy, reward=4.546!
[2025-07-07 11:04:27,128][04865] Updated weights for policy 0, policy_version 30 (0.0018)
[2025-07-07 11:04:29,117][04410] Fps is (10 sec: 4096.0, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 131072. Throughput: 0: 753.0. Samples: 30120. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:04:29,118][04410] Avg episode reward: [(0, '4.289')]
[2025-07-07 11:04:34,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3367.8, 300 sec: 3367.8). Total num frames: 151552. Throughput: 0: 818.8. Samples: 36846. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:04:34,118][04410] Avg episode reward: [(0, '4.362')]
[2025-07-07 11:04:36,401][04865] Updated weights for policy 0, policy_version 40 (0.0014)
[2025-07-07 11:04:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3358.7, 300 sec: 3358.7). Total num frames: 167936. Throughput: 0: 928.0. Samples: 42624. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:04:39,121][04410] Avg episode reward: [(0, '4.398')]
[2025-07-07 11:04:44,117][04410] Fps is (10 sec: 4095.9, 60 sec: 3500.2, 300 sec: 3500.2). Total num frames: 192512. Throughput: 0: 956.8. Samples: 45306. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:04:44,120][04410] Avg episode reward: [(0, '4.342')]
[2025-07-07 11:04:46,999][04865] Updated weights for policy 0, policy_version 50 (0.0013)
[2025-07-07 11:04:49,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3549.9). Total num frames: 212992. Throughput: 0: 992.2. Samples: 51888. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:04:49,118][04410] Avg episode reward: [(0, '4.337')]
[2025-07-07 11:04:54,116][04410] Fps is (10 sec: 3686.5, 60 sec: 3823.0, 300 sec: 3528.9). Total num frames: 229376. Throughput: 0: 961.8. Samples: 57196. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:04:54,118][04410] Avg episode reward: [(0, '4.490')]
[2025-07-07 11:04:57,494][04865] Updated weights for policy 0, policy_version 60 (0.0013)
[2025-07-07 11:04:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3569.4). Total num frames: 249856. Throughput: 0: 986.0. Samples: 60384. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:04:59,121][04410] Avg episode reward: [(0, '4.597')]
[2025-07-07 11:04:59,127][04851] Saving new best policy, reward=4.597!
[2025-07-07 11:05:04,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3659.1). Total num frames: 274432. Throughput: 0: 1005.6. Samples: 67056. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:05:04,124][04410] Avg episode reward: [(0, '4.547')]
[2025-07-07 11:05:08,410][04865] Updated weights for policy 0, policy_version 70 (0.0012)
[2025-07-07 11:05:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3584.0). Total num frames: 286720. Throughput: 0: 976.4. Samples: 71960. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:05:09,119][04410] Avg episode reward: [(0, '4.441')]
[2025-07-07 11:05:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3662.3). Total num frames: 311296. Throughput: 0: 1002.7. Samples: 75240. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:05:14,121][04410] Avg episode reward: [(0, '4.467')]
[2025-07-07 11:05:17,767][04865] Updated weights for policy 0, policy_version 80 (0.0012)
[2025-07-07 11:05:19,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3640.9). Total num frames: 327680. Throughput: 0: 1000.6. Samples: 81874. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:05:19,119][04410] Avg episode reward: [(0, '4.676')]
[2025-07-07 11:05:19,194][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000081_331776.pth...
[2025-07-07 11:05:19,324][04851] Saving new best policy, reward=4.676!
[2025-07-07 11:05:24,118][04410] Fps is (10 sec: 3685.8, 60 sec: 3959.4, 300 sec: 3664.8). Total num frames: 348160. Throughput: 0: 979.2. Samples: 86690. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:05:24,131][04410] Avg episode reward: [(0, '4.566')]
[2025-07-07 11:05:28,643][04865] Updated weights for policy 0, policy_version 90 (0.0015)
[2025-07-07 11:05:29,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3686.4). Total num frames: 368640. Throughput: 0: 992.3. Samples: 89958. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:05:29,118][04410] Avg episode reward: [(0, '4.426')]
[2025-07-07 11:05:34,116][04410] Fps is (10 sec: 3687.0, 60 sec: 3891.2, 300 sec: 3666.9). Total num frames: 385024. Throughput: 0: 985.6. Samples: 96242. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:05:34,118][04410] Avg episode reward: [(0, '4.486')]
[2025-07-07 11:05:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3686.4). Total num frames: 405504. Throughput: 0: 984.2. Samples: 101484. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:05:39,120][04410] Avg episode reward: [(0, '4.383')]
[2025-07-07 11:05:39,601][04865] Updated weights for policy 0, policy_version 100 (0.0020)
[2025-07-07 11:05:44,121][04410] Fps is (10 sec: 4503.7, 60 sec: 3959.2, 300 sec: 3739.7). Total num frames: 430080. Throughput: 0: 985.0. Samples: 104712. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:05:44,122][04410] Avg episode reward: [(0, '4.286')]
[2025-07-07 11:05:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3686.4). Total num frames: 442368. Throughput: 0: 961.8. Samples: 110336. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:05:49,118][04410] Avg episode reward: [(0, '4.290')]
[2025-07-07 11:05:51,985][04865] Updated weights for policy 0, policy_version 110 (0.0021)
[2025-07-07 11:05:54,116][04410] Fps is (10 sec: 2868.4, 60 sec: 3822.9, 300 sec: 3670.0). Total num frames: 458752. Throughput: 0: 942.7. Samples: 114380. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:05:54,118][04410] Avg episode reward: [(0, '4.485')]
[2025-07-07 11:05:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3686.4). Total num frames: 479232. Throughput: 0: 941.5. Samples: 117608. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:05:59,118][04410] Avg episode reward: [(0, '4.479')]
[2025-07-07 11:06:01,729][04865] Updated weights for policy 0, policy_version 120 (0.0012)
[2025-07-07 11:06:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3671.2). Total num frames: 495616. Throughput: 0: 930.4. Samples: 123742. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:06:04,118][04410] Avg episode reward: [(0, '4.856')]
[2025-07-07 11:06:04,119][04851] Saving new best policy, reward=4.856!
[2025-07-07 11:06:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3686.4). Total num frames: 516096. Throughput: 0: 939.8. Samples: 128980. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:06:09,125][04410] Avg episode reward: [(0, '4.905')]
[2025-07-07 11:06:09,134][04851] Saving new best policy, reward=4.905!
[2025-07-07 11:06:12,617][04865] Updated weights for policy 0, policy_version 130 (0.0016)
[2025-07-07 11:06:14,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3700.5). Total num frames: 536576. Throughput: 0: 937.3. Samples: 132138. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:06:14,120][04410] Avg episode reward: [(0, '4.406')]
[2025-07-07 11:06:19,118][04410] Fps is (10 sec: 3685.7, 60 sec: 3754.5, 300 sec: 3686.4). Total num frames: 552960. Throughput: 0: 924.9. Samples: 137862. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:06:19,120][04410] Avg episode reward: [(0, '4.601')]
[2025-07-07 11:06:23,346][04865] Updated weights for policy 0, policy_version 140 (0.0012)
[2025-07-07 11:06:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3699.6). Total num frames: 573440. Throughput: 0: 934.6. Samples: 143540. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-07-07 11:06:24,118][04410] Avg episode reward: [(0, '4.688')]
[2025-07-07 11:06:29,117][04410] Fps is (10 sec: 4506.4, 60 sec: 3822.9, 300 sec: 3737.6). Total num frames: 598016. Throughput: 0: 935.4. Samples: 146800. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:06:29,118][04410] Avg episode reward: [(0, '4.582')]
[2025-07-07 11:06:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3698.8). Total num frames: 610304. Throughput: 0: 931.7. Samples: 152264. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:06:34,118][04410] Avg episode reward: [(0, '4.570')]
[2025-07-07 11:06:34,350][04865] Updated weights for policy 0, policy_version 150 (0.0012)
[2025-07-07 11:06:39,116][04410] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3734.6). Total num frames: 634880. Throughput: 0: 979.3. Samples: 158450. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:06:39,118][04410] Avg episode reward: [(0, '4.685')]
[2025-07-07 11:06:43,471][04865] Updated weights for policy 0, policy_version 160 (0.0012)
[2025-07-07 11:06:44,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3754.9, 300 sec: 3744.9). Total num frames: 655360. Throughput: 0: 982.0. Samples: 161796. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:06:44,118][04410] Avg episode reward: [(0, '4.664')]
[2025-07-07 11:06:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3731.9). Total num frames: 671744. Throughput: 0: 954.1. Samples: 166678. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:06:49,118][04410] Avg episode reward: [(0, '4.715')]
[2025-07-07 11:06:54,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3741.8). Total num frames: 692224. Throughput: 0: 983.1. Samples: 173220. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:06:54,121][04410] Avg episode reward: [(0, '4.796')]
[2025-07-07 11:06:54,521][04865] Updated weights for policy 0, policy_version 170 (0.0013)
[2025-07-07 11:06:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3729.5). Total num frames: 708608. Throughput: 0: 981.9. Samples: 176322. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:06:59,119][04410] Avg episode reward: [(0, '4.725')]
[2025-07-07 11:07:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3738.9). Total num frames: 729088. Throughput: 0: 958.0. Samples: 180972. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:07:04,118][04410] Avg episode reward: [(0, '4.717')]
[2025-07-07 11:07:05,895][04865] Updated weights for policy 0, policy_version 180 (0.0013)
[2025-07-07 11:07:09,116][04410] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3747.8). Total num frames: 749568. Throughput: 0: 973.1. Samples: 187328. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:07:09,118][04410] Avg episode reward: [(0, '4.801')]
[2025-07-07 11:07:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3736.4). Total num frames: 765952. Throughput: 0: 971.9. Samples: 190534. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:07:14,120][04410] Avg episode reward: [(0, '4.585')]
[2025-07-07 11:07:16,898][04865] Updated weights for policy 0, policy_version 190 (0.0013)
[2025-07-07 11:07:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3744.9). Total num frames: 786432. Throughput: 0: 957.2. Samples: 195340. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:07:19,118][04410] Avg episode reward: [(0, '4.610')]
[2025-07-07 11:07:19,124][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000192_786432.pth...
[2025-07-07 11:07:24,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3753.1). Total num frames: 806912. Throughput: 0: 962.8. Samples: 201774. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:07:24,119][04410] Avg episode reward: [(0, '4.759')]
[2025-07-07 11:07:27,133][04865] Updated weights for policy 0, policy_version 200 (0.0012)
[2025-07-07 11:07:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3742.3). Total num frames: 823296. Throughput: 0: 950.6. Samples: 204574. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:07:29,118][04410] Avg episode reward: [(0, '4.612')]
[2025-07-07 11:07:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3750.1). Total num frames: 843776. Throughput: 0: 958.0. Samples: 209786. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:07:34,118][04410] Avg episode reward: [(0, '4.531')]
[2025-07-07 11:07:37,537][04865] Updated weights for policy 0, policy_version 210 (0.0013)
[2025-07-07 11:07:39,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3775.4). Total num frames: 868352. Throughput: 0: 960.5. Samples: 216442. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:07:39,117][04410] Avg episode reward: [(0, '4.524')]
[2025-07-07 11:07:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3747.4). Total num frames: 880640. Throughput: 0: 944.9. Samples: 218842. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:07:44,118][04410] Avg episode reward: [(0, '4.722')]
[2025-07-07 11:07:48,260][04865] Updated weights for policy 0, policy_version 220 (0.0013)
[2025-07-07 11:07:49,116][04410] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3754.7). Total num frames: 901120. Throughput: 0: 967.3. Samples: 224502. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:07:49,121][04410] Avg episode reward: [(0, '4.571')]
[2025-07-07 11:07:54,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3761.6). Total num frames: 921600. Throughput: 0: 969.8. Samples: 230968. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-07-07 11:07:54,118][04410] Avg episode reward: [(0, '4.452')]
[2025-07-07 11:07:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3751.9). Total num frames: 937984. Throughput: 0: 943.8. Samples: 233004. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:07:59,120][04410] Avg episode reward: [(0, '4.429')]
[2025-07-07 11:07:59,418][04865] Updated weights for policy 0, policy_version 230 (0.0019)
[2025-07-07 11:08:04,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3774.7). Total num frames: 962560. Throughput: 0: 972.4. Samples: 239098. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:08:04,118][04410] Avg episode reward: [(0, '4.592')]
[2025-07-07 11:08:09,123][04410] Fps is (10 sec: 4093.4, 60 sec: 3822.5, 300 sec: 3765.1). Total num frames: 978944. Throughput: 0: 967.5. Samples: 245320. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:08:09,128][04410] Avg episode reward: [(0, '4.814')]
[2025-07-07 11:08:09,337][04865] Updated weights for policy 0, policy_version 240 (0.0018)
[2025-07-07 11:08:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3771.4). Total num frames: 999424. Throughput: 0: 951.2. Samples: 247380. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:08:14,121][04410] Avg episode reward: [(0, '4.815')]
[2025-07-07 11:08:19,116][04410] Fps is (10 sec: 4098.6, 60 sec: 3891.2, 300 sec: 3777.4). Total num frames: 1019904. Throughput: 0: 981.3. Samples: 253946. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:08:19,118][04410] Avg episode reward: [(0, '4.900')]
[2025-07-07 11:08:19,586][04865] Updated weights for policy 0, policy_version 250 (0.0017)
[2025-07-07 11:08:24,119][04410] Fps is (10 sec: 3685.3, 60 sec: 3822.7, 300 sec: 3768.3). Total num frames: 1036288. Throughput: 0: 961.2. Samples: 259700. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:08:24,121][04410] Avg episode reward: [(0, '5.042')]
[2025-07-07 11:08:24,123][04851] Saving new best policy, reward=5.042!
[2025-07-07 11:08:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3774.2). Total num frames: 1056768. Throughput: 0: 962.3. Samples: 262146. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:08:29,121][04410] Avg episode reward: [(0, '5.140')]
[2025-07-07 11:08:29,127][04851] Saving new best policy, reward=5.140!
[2025-07-07 11:08:30,354][04865] Updated weights for policy 0, policy_version 260 (0.0018)
[2025-07-07 11:08:34,116][04410] Fps is (10 sec: 4507.0, 60 sec: 3959.5, 300 sec: 3794.2). Total num frames: 1081344. Throughput: 0: 984.9. Samples: 268822. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:08:34,121][04410] Avg episode reward: [(0, '5.043')]
[2025-07-07 11:08:39,121][04410] Fps is (10 sec: 4094.2, 60 sec: 3822.6, 300 sec: 3785.2). Total num frames: 1097728. Throughput: 0: 963.0. Samples: 274306. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:08:39,122][04410] Avg episode reward: [(0, '4.900')]
[2025-07-07 11:08:41,068][04865] Updated weights for policy 0, policy_version 270 (0.0017)
[2025-07-07 11:08:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3790.5). Total num frames: 1118208. Throughput: 0: 981.7. Samples: 277180. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:08:44,121][04410] Avg episode reward: [(0, '4.962')]
[2025-07-07 11:08:49,116][04410] Fps is (10 sec: 4507.6, 60 sec: 4027.7, 300 sec: 3873.9). Total num frames: 1142784. Throughput: 0: 998.0. Samples: 284010. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:08:49,122][04410] Avg episode reward: [(0, '5.213')]
[2025-07-07 11:08:49,131][04851] Saving new best policy, reward=5.213!
[2025-07-07 11:08:50,489][04865] Updated weights for policy 0, policy_version 280 (0.0012)
[2025-07-07 11:08:54,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1155072. Throughput: 0: 971.1. Samples: 289012. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:08:54,125][04410] Avg episode reward: [(0, '5.162')]
[2025-07-07 11:08:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3887.7). Total num frames: 1179648. Throughput: 0: 999.4. Samples: 292354. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:08:59,119][04410] Avg episode reward: [(0, '5.086')]
[2025-07-07 11:09:00,771][04865] Updated weights for policy 0, policy_version 290 (0.0016)
[2025-07-07 11:09:04,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1200128. Throughput: 0: 1004.4. Samples: 299146. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:09:04,119][04410] Avg episode reward: [(0, '5.276')]
[2025-07-07 11:09:04,122][04851] Saving new best policy, reward=5.276!
[2025-07-07 11:09:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.9, 300 sec: 3887.7). Total num frames: 1216512. Throughput: 0: 988.0. Samples: 304158. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:09:09,121][04410] Avg episode reward: [(0, '5.597')]
[2025-07-07 11:09:09,127][04851] Saving new best policy, reward=5.597!
[2025-07-07 11:09:11,304][04865] Updated weights for policy 0, policy_version 300 (0.0024)
[2025-07-07 11:09:14,118][04410] Fps is (10 sec: 3685.8, 60 sec: 3959.4, 300 sec: 3887.7). Total num frames: 1236992. Throughput: 0: 1006.6. Samples: 307444. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:09:14,121][04410] Avg episode reward: [(0, '5.911')]
[2025-07-07 11:09:14,175][04851] Saving new best policy, reward=5.911!
[2025-07-07 11:09:19,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1257472. Throughput: 0: 1006.7. Samples: 314124. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:09:19,119][04410] Avg episode reward: [(0, '6.257')]
[2025-07-07 11:09:19,135][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000307_1257472.pth...
[2025-07-07 11:09:19,236][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000081_331776.pth
[2025-07-07 11:09:19,249][04851] Saving new best policy, reward=6.257!
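The `Saving ... / Removing ...` pair above shows checkpoint rotation: when a new checkpoint lands, the oldest one beyond the retention limit is deleted. A sketch of that policy over the `checkpoint_<version>_<frames>.pth` naming seen in the log (the retention count of 2 is an assumption inferred from the log, not a confirmed setting):

```python
import os
import re
import tempfile

def rotate_checkpoints(ckpt_dir, keep=2):
    """Delete all but the newest `keep` checkpoints, ordered by the
    version number embedded in checkpoint_<version>_<frames>.pth."""
    pat = re.compile(r"checkpoint_(\d+)_(\d+)\.pth$")
    ckpts = sorted(
        (f for f in os.listdir(ckpt_dir) if pat.search(f)),
        key=lambda f: int(pat.search(f).group(1)),
    )
    for stale in ckpts[:-keep]:
        os.remove(os.path.join(ckpt_dir, stale))

# demo with the three checkpoint names that appear in this log
demo_dir = tempfile.mkdtemp()
for name in ("checkpoint_000000081_331776.pth",
             "checkpoint_000000192_786432.pth",
             "checkpoint_000000307_1257472.pth"):
    open(os.path.join(demo_dir, name), "w").close()
rotate_checkpoints(demo_dir, keep=2)
remaining = sorted(os.listdir(demo_dir))
```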
[2025-07-07 11:09:22,266][04865] Updated weights for policy 0, policy_version 310 (0.0013)
[2025-07-07 11:09:24,116][04410] Fps is (10 sec: 4096.6, 60 sec: 4027.9, 300 sec: 3887.7). Total num frames: 1277952. Throughput: 0: 993.1. Samples: 318990. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:09:24,120][04410] Avg episode reward: [(0, '6.511')]
[2025-07-07 11:09:24,124][04851] Saving new best policy, reward=6.511!
[2025-07-07 11:09:29,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3887.7). Total num frames: 1298432. Throughput: 0: 1001.6. Samples: 322250. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:09:29,118][04410] Avg episode reward: [(0, '6.591')]
[2025-07-07 11:09:29,123][04851] Saving new best policy, reward=6.591!
[2025-07-07 11:09:31,616][04865] Updated weights for policy 0, policy_version 320 (0.0015)
[2025-07-07 11:09:34,117][04410] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1314816. Throughput: 0: 990.5. Samples: 328582. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:09:34,118][04410] Avg episode reward: [(0, '6.454')]
[2025-07-07 11:09:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 3873.8). Total num frames: 1335296. Throughput: 0: 999.0. Samples: 333968. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:09:39,120][04410] Avg episode reward: [(0, '6.062')]
[2025-07-07 11:09:42,284][04865] Updated weights for policy 0, policy_version 330 (0.0019)
[2025-07-07 11:09:44,116][04410] Fps is (10 sec: 4505.8, 60 sec: 4027.7, 300 sec: 3887.7). Total num frames: 1359872. Throughput: 0: 997.5. Samples: 337240. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:09:44,118][04410] Avg episode reward: [(0, '5.635')]
[2025-07-07 11:09:49,120][04410] Fps is (10 sec: 4094.6, 60 sec: 3891.0, 300 sec: 3887.7). Total num frames: 1376256. Throughput: 0: 977.8. Samples: 343152. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:09:49,123][04410] Avg episode reward: [(0, '5.135')]
[2025-07-07 11:09:53,130][04865] Updated weights for policy 0, policy_version 340 (0.0020)
[2025-07-07 11:09:54,116][04410] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 1392640. Throughput: 0: 984.5. Samples: 348462. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:09:54,118][04410] Avg episode reward: [(0, '4.876')]
[2025-07-07 11:09:59,116][04410] Fps is (10 sec: 3687.7, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1413120. Throughput: 0: 956.6. Samples: 350490. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:09:59,120][04410] Avg episode reward: [(0, '5.141')]
[2025-07-07 11:10:04,118][04410] Fps is (10 sec: 3276.2, 60 sec: 3754.5, 300 sec: 3859.9). Total num frames: 1425408. Throughput: 0: 930.2. Samples: 355984. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:10:04,120][04410] Avg episode reward: [(0, '5.047')]
[2025-07-07 11:10:05,240][04865] Updated weights for policy 0, policy_version 350 (0.0017)
[2025-07-07 11:10:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1449984. Throughput: 0: 960.0. Samples: 362188. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:10:09,121][04410] Avg episode reward: [(0, '5.424')]
[2025-07-07 11:10:14,116][04410] Fps is (10 sec: 4506.5, 60 sec: 3891.3, 300 sec: 3873.8). Total num frames: 1470464. Throughput: 0: 962.0. Samples: 365540. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:10:14,118][04410] Avg episode reward: [(0, '5.867')]
[2025-07-07 11:10:14,805][04865] Updated weights for policy 0, policy_version 360 (0.0021)
[2025-07-07 11:10:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 1486848. Throughput: 0: 933.7. Samples: 370600. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:10:19,118][04410] Avg episode reward: [(0, '6.156')]
[2025-07-07 11:10:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 1507328. Throughput: 0: 962.3. Samples: 377270. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:10:24,118][04410] Avg episode reward: [(0, '5.896')]
[2025-07-07 11:10:25,081][04865] Updated weights for policy 0, policy_version 370 (0.0014)
[2025-07-07 11:10:29,117][04410] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 1527808. Throughput: 0: 963.8. Samples: 380610. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:10:29,119][04410] Avg episode reward: [(0, '5.630')]
[2025-07-07 11:10:34,117][04410] Fps is (10 sec: 4095.6, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1548288. Throughput: 0: 946.5. Samples: 385742. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:10:34,119][04410] Avg episode reward: [(0, '5.689')]
[2025-07-07 11:10:35,597][04865] Updated weights for policy 0, policy_version 380 (0.0012)
[2025-07-07 11:10:39,116][04410] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1568768. Throughput: 0: 978.4. Samples: 392488. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:10:39,118][04410] Avg episode reward: [(0, '5.920')]
[2025-07-07 11:10:44,116][04410] Fps is (10 sec: 4096.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1589248. Throughput: 0: 1007.2. Samples: 395814. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:10:44,118][04410] Avg episode reward: [(0, '6.455')]
[2025-07-07 11:10:46,274][04865] Updated weights for policy 0, policy_version 390 (0.0015)
[2025-07-07 11:10:49,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3901.6). Total num frames: 1609728. Throughput: 0: 997.1. Samples: 400852. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:10:49,120][04410] Avg episode reward: [(0, '6.134')]
[2025-07-07 11:10:54,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1630208. Throughput: 0: 1008.6. Samples: 407574. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:10:54,118][04410] Avg episode reward: [(0, '6.125')]
[2025-07-07 11:10:55,567][04865] Updated weights for policy 0, policy_version 400 (0.0015)
[2025-07-07 11:10:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1646592. Throughput: 0: 997.9. Samples: 410446. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:10:59,118][04410] Avg episode reward: [(0, '6.341')]
[2025-07-07 11:11:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.9, 300 sec: 3901.6). Total num frames: 1667072. Throughput: 0: 1008.4. Samples: 415978. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:11:04,118][04410] Avg episode reward: [(0, '6.946')]
[2025-07-07 11:11:04,122][04851] Saving new best policy, reward=6.946!
[2025-07-07 11:11:06,223][04865] Updated weights for policy 0, policy_version 410 (0.0015)
[2025-07-07 11:11:09,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 1691648. Throughput: 0: 1008.2. Samples: 422638. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:11:09,123][04410] Avg episode reward: [(0, '7.405')]
[2025-07-07 11:11:09,134][04851] Saving new best policy, reward=7.405!
[2025-07-07 11:11:14,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1708032. Throughput: 0: 985.8. Samples: 424972. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:11:14,121][04410] Avg episode reward: [(0, '7.070')]
[2025-07-07 11:11:16,865][04865] Updated weights for policy 0, policy_version 420 (0.0013)
[2025-07-07 11:11:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 1728512. Throughput: 0: 1004.9. Samples: 430962. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:11:19,118][04410] Avg episode reward: [(0, '7.538')]
[2025-07-07 11:11:19,124][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000422_1728512.pth...
[2025-07-07 11:11:19,229][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000192_786432.pth
[2025-07-07 11:11:19,239][04851] Saving new best policy, reward=7.538!
[2025-07-07 11:11:24,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 1748992. Throughput: 0: 995.7. Samples: 437294. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:11:24,118][04410] Avg episode reward: [(0, '7.507')]
[2025-07-07 11:11:27,722][04865] Updated weights for policy 0, policy_version 430 (0.0015)
[2025-07-07 11:11:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1765376. Throughput: 0: 968.0. Samples: 439374. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:11:29,126][04410] Avg episode reward: [(0, '7.812')]
[2025-07-07 11:11:29,131][04851] Saving new best policy, reward=7.812!
[2025-07-07 11:11:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1785856. Throughput: 0: 999.5. Samples: 445830. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:11:34,120][04410] Avg episode reward: [(0, '7.311')]
[2025-07-07 11:11:36,892][04865] Updated weights for policy 0, policy_version 440 (0.0012)
[2025-07-07 11:11:39,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1806336. Throughput: 0: 985.8. Samples: 451934. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:11:39,119][04410] Avg episode reward: [(0, '7.482')]
[2025-07-07 11:11:44,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1826816. Throughput: 0: 973.7. Samples: 454262. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:11:44,118][04410] Avg episode reward: [(0, '7.423')]
[2025-07-07 11:11:47,520][04865] Updated weights for policy 0, policy_version 450 (0.0015)
[2025-07-07 11:11:49,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1847296. Throughput: 0: 1001.7. Samples: 461056. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:11:49,121][04410] Avg episode reward: [(0, '7.922')]
[2025-07-07 11:11:49,127][04851] Saving new best policy, reward=7.922!
[2025-07-07 11:11:54,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 1863680. Throughput: 0: 976.5. Samples: 466582. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:11:54,118][04410] Avg episode reward: [(0, '8.334')]
[2025-07-07 11:11:54,123][04851] Saving new best policy, reward=8.334!
[2025-07-07 11:11:58,331][04865] Updated weights for policy 0, policy_version 460 (0.0014)
[2025-07-07 11:11:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1884160. Throughput: 0: 983.0. Samples: 469206. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:11:59,121][04410] Avg episode reward: [(0, '8.886')]
[2025-07-07 11:11:59,132][04851] Saving new best policy, reward=8.886!
[2025-07-07 11:12:04,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 1908736. Throughput: 0: 997.4. Samples: 475846. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:12:04,121][04410] Avg episode reward: [(0, '9.472')]
[2025-07-07 11:12:04,125][04851] Saving new best policy, reward=9.472!
[2025-07-07 11:12:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 1921024. Throughput: 0: 973.4. Samples: 481098. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:12:09,118][04410] Avg episode reward: [(0, '9.213')]
[2025-07-07 11:12:09,181][04865] Updated weights for policy 0, policy_version 470 (0.0021)
[2025-07-07 11:12:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 1945600. Throughput: 0: 996.3. Samples: 484208. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:12:14,118][04410] Avg episode reward: [(0, '8.857')]
[2025-07-07 11:12:18,185][04865] Updated weights for policy 0, policy_version 480 (0.0019)
[2025-07-07 11:12:19,117][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 1966080. Throughput: 0: 1000.6. Samples: 490856. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:12:19,120][04410] Avg episode reward: [(0, '8.354')]
[2025-07-07 11:12:24,117][04410] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 1982464. Throughput: 0: 976.2. Samples: 495862. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:12:24,118][04410] Avg episode reward: [(0, '9.068')]
[2025-07-07 11:12:29,050][04865] Updated weights for policy 0, policy_version 490 (0.0013)
[2025-07-07 11:12:29,116][04410] Fps is (10 sec: 4096.1, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2007040. Throughput: 0: 998.3. Samples: 499186. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:12:29,120][04410] Avg episode reward: [(0, '9.879')]
[2025-07-07 11:12:29,127][04851] Saving new best policy, reward=9.879!
[2025-07-07 11:12:34,116][04410] Fps is (10 sec: 4505.7, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 2027520. Throughput: 0: 996.5. Samples: 505898. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:12:34,125][04410] Avg episode reward: [(0, '10.619')]
[2025-07-07 11:12:34,127][04851] Saving new best policy, reward=10.619!
[2025-07-07 11:12:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2043904. Throughput: 0: 983.8. Samples: 510854. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:12:39,121][04410] Avg episode reward: [(0, '10.121')]
[2025-07-07 11:12:39,659][04865] Updated weights for policy 0, policy_version 500 (0.0012)
[2025-07-07 11:12:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2064384. Throughput: 0: 1000.9. Samples: 514246. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:12:44,121][04410] Avg episode reward: [(0, '11.016')]
[2025-07-07 11:12:44,124][04851] Saving new best policy, reward=11.016!
[2025-07-07 11:12:49,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2084864. Throughput: 0: 993.0. Samples: 520532. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:12:49,118][04410] Avg episode reward: [(0, '10.292')]
[2025-07-07 11:12:50,195][04865] Updated weights for policy 0, policy_version 510 (0.0018)
[2025-07-07 11:12:54,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2101248. Throughput: 0: 989.5. Samples: 525624. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:12:54,120][04410] Avg episode reward: [(0, '11.231')]
[2025-07-07 11:12:54,128][04851] Saving new best policy, reward=11.231!
[2025-07-07 11:12:59,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2125824. Throughput: 0: 993.4. Samples: 528910. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:12:59,119][04410] Avg episode reward: [(0, '11.077')]
[2025-07-07 11:12:59,899][04865] Updated weights for policy 0, policy_version 520 (0.0014)
[2025-07-07 11:13:04,117][04410] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3943.4). Total num frames: 2142208. Throughput: 0: 977.3. Samples: 534836. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:13:04,119][04410] Avg episode reward: [(0, '11.916')]
[2025-07-07 11:13:04,124][04851] Saving new best policy, reward=11.916!
[2025-07-07 11:13:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2162688. Throughput: 0: 990.7. Samples: 540442. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:13:09,121][04410] Avg episode reward: [(0, '12.971')]
[2025-07-07 11:13:09,128][04851] Saving new best policy, reward=12.971!
[2025-07-07 11:13:10,759][04865] Updated weights for policy 0, policy_version 530 (0.0023)
[2025-07-07 11:13:14,116][04410] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2183168. Throughput: 0: 990.4. Samples: 543754. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:13:14,118][04410] Avg episode reward: [(0, '12.860')]
[2025-07-07 11:13:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2199552. Throughput: 0: 966.2. Samples: 549378. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:13:19,118][04410] Avg episode reward: [(0, '13.478')]
[2025-07-07 11:13:19,126][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000537_2199552.pth...
[2025-07-07 11:13:19,225][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000307_1257472.pth
[2025-07-07 11:13:19,236][04851] Saving new best policy, reward=13.478!
[2025-07-07 11:13:21,648][04865] Updated weights for policy 0, policy_version 540 (0.0014)
[2025-07-07 11:13:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2220032. Throughput: 0: 985.6. Samples: 555204. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:13:24,125][04410] Avg episode reward: [(0, '14.936')]
[2025-07-07 11:13:24,128][04851] Saving new best policy, reward=14.936!
[2025-07-07 11:13:29,123][04410] Fps is (10 sec: 4093.4, 60 sec: 3890.8, 300 sec: 3929.3). Total num frames: 2240512. Throughput: 0: 982.0. Samples: 558444. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:13:29,124][04410] Avg episode reward: [(0, '14.756')]
[2025-07-07 11:13:32,206][04865] Updated weights for policy 0, policy_version 550 (0.0015)
[2025-07-07 11:13:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3929.4). Total num frames: 2256896. Throughput: 0: 959.2. Samples: 563698. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:13:34,118][04410] Avg episode reward: [(0, '15.400')]
[2025-07-07 11:13:34,119][04851] Saving new best policy, reward=15.400!
[2025-07-07 11:13:39,116][04410] Fps is (10 sec: 4098.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2281472. Throughput: 0: 989.9. Samples: 570168. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:13:39,118][04410] Avg episode reward: [(0, '16.528')]
[2025-07-07 11:13:39,133][04851] Saving new best policy, reward=16.528!
[2025-07-07 11:13:41,641][04865] Updated weights for policy 0, policy_version 560 (0.0019)
[2025-07-07 11:13:44,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 2301952. Throughput: 0: 990.9. Samples: 573502. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:13:44,125][04410] Avg episode reward: [(0, '16.875')]
[2025-07-07 11:13:44,126][04851] Saving new best policy, reward=16.875!
[2025-07-07 11:13:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2318336. Throughput: 0: 970.8. Samples: 578524. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:13:49,118][04410] Avg episode reward: [(0, '18.521')]
[2025-07-07 11:13:49,123][04851] Saving new best policy, reward=18.521!
[2025-07-07 11:13:52,351][04865] Updated weights for policy 0, policy_version 570 (0.0012)
[2025-07-07 11:13:54,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 2338816. Throughput: 0: 992.4. Samples: 585098. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:13:54,118][04410] Avg episode reward: [(0, '19.531')]
[2025-07-07 11:13:54,121][04851] Saving new best policy, reward=19.531!
[2025-07-07 11:13:59,121][04410] Fps is (10 sec: 3684.5, 60 sec: 3822.6, 300 sec: 3915.4). Total num frames: 2355200. Throughput: 0: 991.4. Samples: 588374. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:13:59,128][04410] Avg episode reward: [(0, '18.609')]
[2025-07-07 11:14:04,117][04410] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 2371584. Throughput: 0: 947.6. Samples: 592018. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:14:04,121][04410] Avg episode reward: [(0, '17.983')]
[2025-07-07 11:14:04,658][04865] Updated weights for policy 0, policy_version 580 (0.0028)
[2025-07-07 11:14:09,116][04410] Fps is (10 sec: 3688.3, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 2392064. Throughput: 0: 959.7. Samples: 598390. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:14:09,119][04410] Avg episode reward: [(0, '17.484')]
[2025-07-07 11:14:14,117][04410] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 2412544. Throughput: 0: 962.3. Samples: 601740. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:14:14,118][04410] Avg episode reward: [(0, '18.564')]
[2025-07-07 11:14:14,413][04865] Updated weights for policy 0, policy_version 590 (0.0014)
[2025-07-07 11:14:19,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2433024. Throughput: 0: 957.1. Samples: 606768. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:14:19,118][04410] Avg episode reward: [(0, '19.893')]
[2025-07-07 11:14:19,124][04851] Saving new best policy, reward=19.893!
[2025-07-07 11:14:24,116][04410] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2453504. Throughput: 0: 961.1. Samples: 613416. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:14:24,118][04410] Avg episode reward: [(0, '19.879')]
[2025-07-07 11:14:24,585][04865] Updated weights for policy 0, policy_version 600 (0.0017)
[2025-07-07 11:14:29,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.6, 300 sec: 3929.4). Total num frames: 2473984. Throughput: 0: 960.8. Samples: 616740. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:14:29,118][04410] Avg episode reward: [(0, '20.487')]
[2025-07-07 11:14:29,129][04851] Saving new best policy, reward=20.487!
[2025-07-07 11:14:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2490368. Throughput: 0: 961.2. Samples: 621778. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:14:34,118][04410] Avg episode reward: [(0, '21.242')]
[2025-07-07 11:14:34,119][04851] Saving new best policy, reward=21.242!
[2025-07-07 11:14:35,304][04865] Updated weights for policy 0, policy_version 610 (0.0023)
[2025-07-07 11:14:39,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2514944. Throughput: 0: 964.1. Samples: 628484. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:14:39,122][04410] Avg episode reward: [(0, '19.036')]
[2025-07-07 11:14:44,119][04410] Fps is (10 sec: 4095.0, 60 sec: 3822.8, 300 sec: 3915.5). Total num frames: 2531328. Throughput: 0: 961.5. Samples: 631638. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:14:44,120][04410] Avg episode reward: [(0, '18.026')]
[2025-07-07 11:14:45,824][04865] Updated weights for policy 0, policy_version 620 (0.0018)
[2025-07-07 11:14:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 2551808. Throughput: 0: 995.4. Samples: 636812. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:14:49,122][04410] Avg episode reward: [(0, '19.279')]
[2025-07-07 11:14:54,116][04410] Fps is (10 sec: 4506.7, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2576384. Throughput: 0: 1003.1. Samples: 643530. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:14:54,118][04410] Avg episode reward: [(0, '17.269')]
[2025-07-07 11:14:55,231][04865] Updated weights for policy 0, policy_version 630 (0.0021)
[2025-07-07 11:14:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.5, 300 sec: 3943.3). Total num frames: 2588672. Throughput: 0: 990.4. Samples: 646310. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:14:59,118][04410] Avg episode reward: [(0, '17.572')]
[2025-07-07 11:15:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2613248. Throughput: 0: 1002.4. Samples: 651876. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:15:04,121][04410] Avg episode reward: [(0, '18.345')]
[2025-07-07 11:15:05,851][04865] Updated weights for policy 0, policy_version 640 (0.0014)
[2025-07-07 11:15:09,117][04410] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2633728. Throughput: 0: 1004.6. Samples: 658622. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:15:09,118][04410] Avg episode reward: [(0, '18.956')]
[2025-07-07 11:15:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2650112. Throughput: 0: 979.9. Samples: 660836. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-07-07 11:15:14,118][04410] Avg episode reward: [(0, '19.086')]
[2025-07-07 11:15:16,315][04865] Updated weights for policy 0, policy_version 650 (0.0018)
[2025-07-07 11:15:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2670592. Throughput: 0: 1004.4. Samples: 666974. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:15:19,122][04410] Avg episode reward: [(0, '19.090')]
[2025-07-07 11:15:19,201][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000653_2674688.pth...
[2025-07-07 11:15:19,292][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000422_1728512.pth
[2025-07-07 11:15:24,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2691072. Throughput: 0: 996.5. Samples: 673326. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:15:24,121][04410] Avg episode reward: [(0, '17.863')]
[2025-07-07 11:15:27,130][04865] Updated weights for policy 0, policy_version 660 (0.0022)
[2025-07-07 11:15:29,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2711552. Throughput: 0: 972.1. Samples: 675378. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:15:29,118][04410] Avg episode reward: [(0, '18.698')]
[2025-07-07 11:15:34,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2732032. Throughput: 0: 1006.0. Samples: 682084. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:15:34,121][04410] Avg episode reward: [(0, '18.830')]
[2025-07-07 11:15:36,280][04865] Updated weights for policy 0, policy_version 670 (0.0012)
[2025-07-07 11:15:39,118][04410] Fps is (10 sec: 4095.3, 60 sec: 3959.3, 300 sec: 3943.2). Total num frames: 2752512. Throughput: 0: 988.1. Samples: 687998. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:15:39,120][04410] Avg episode reward: [(0, '19.310')]
[2025-07-07 11:15:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3929.4). Total num frames: 2768896. Throughput: 0: 980.9. Samples: 690450. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:15:44,118][04410] Avg episode reward: [(0, '19.760')]
[2025-07-07 11:15:46,820][04865] Updated weights for policy 0, policy_version 680 (0.0013)
[2025-07-07 11:15:49,116][04410] Fps is (10 sec: 4096.7, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2793472. Throughput: 0: 1007.2. Samples: 697202. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:15:49,118][04410] Avg episode reward: [(0, '19.609')]
[2025-07-07 11:15:54,119][04410] Fps is (10 sec: 4094.9, 60 sec: 3891.0, 300 sec: 3943.2). Total num frames: 2809856. Throughput: 0: 979.9. Samples: 702718. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:15:54,120][04410] Avg episode reward: [(0, '19.437')]
[2025-07-07 11:15:57,478][04865] Updated weights for policy 0, policy_version 690 (0.0012)
[2025-07-07 11:15:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2830336. Throughput: 0: 994.9. Samples: 705608. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:15:59,120][04410] Avg episode reward: [(0, '20.390')]
[2025-07-07 11:16:04,116][04410] Fps is (10 sec: 4506.8, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2854912. Throughput: 0: 1006.4. Samples: 712264. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:16:04,120][04410] Avg episode reward: [(0, '20.724')]
[2025-07-07 11:16:07,836][04865] Updated weights for policy 0, policy_version 700 (0.0027)
[2025-07-07 11:16:09,118][04410] Fps is (10 sec: 3686.0, 60 sec: 3891.1, 300 sec: 3929.4). Total num frames: 2867200. Throughput: 0: 979.2. Samples: 717390. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-07-07 11:16:09,121][04410] Avg episode reward: [(0, '19.946')]
[2025-07-07 11:16:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2891776. Throughput: 0: 1006.1. Samples: 720654. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:16:14,118][04410] Avg episode reward: [(0, '21.574')]
[2025-07-07 11:16:14,119][04851] Saving new best policy, reward=21.574!
[2025-07-07 11:16:17,446][04865] Updated weights for policy 0, policy_version 710 (0.0013)
[2025-07-07 11:16:19,117][04410] Fps is (10 sec: 4506.0, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2912256. Throughput: 0: 1005.5. Samples: 727332. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:16:19,120][04410] Avg episode reward: [(0, '22.557')]
[2025-07-07 11:16:19,132][04851] Saving new best policy, reward=22.557!
[2025-07-07 11:16:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2928640. Throughput: 0: 981.2. Samples: 732152. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:16:24,118][04410] Avg episode reward: [(0, '22.656')]
[2025-07-07 11:16:24,119][04851] Saving new best policy, reward=22.656!
[2025-07-07 11:16:28,203][04865] Updated weights for policy 0, policy_version 720 (0.0018)
[2025-07-07 11:16:29,116][04410] Fps is (10 sec: 4096.1, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 2953216. Throughput: 0: 1000.7. Samples: 735480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:16:29,118][04410] Avg episode reward: [(0, '22.974')]
[2025-07-07 11:16:29,123][04851] Saving new best policy, reward=22.974!
[2025-07-07 11:16:34,121][04410] Fps is (10 sec: 4094.0, 60 sec: 3959.1, 300 sec: 3943.2). Total num frames: 2969600. Throughput: 0: 1000.3. Samples: 742220. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:16:34,123][04410] Avg episode reward: [(0, '24.588')]
[2025-07-07 11:16:34,124][04851] Saving new best policy, reward=24.588!
[2025-07-07 11:16:38,874][04865] Updated weights for policy 0, policy_version 730 (0.0017)
[2025-07-07 11:16:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3943.3). Total num frames: 2990080. Throughput: 0: 988.1. Samples: 747178. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:16:39,121][04410] Avg episode reward: [(0, '24.610')]
[2025-07-07 11:16:39,128][04851] Saving new best policy, reward=24.610!
[2025-07-07 11:16:44,116][04410] Fps is (10 sec: 4098.0, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 3010560. Throughput: 0: 997.1. Samples: 750476. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:16:44,118][04410] Avg episode reward: [(0, '24.709')]
[2025-07-07 11:16:44,120][04851] Saving new best policy, reward=24.709!
[2025-07-07 11:16:49,117][04410] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3026944. Throughput: 0: 991.8. Samples: 756894. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:16:49,121][04410] Avg episode reward: [(0, '23.523')]
[2025-07-07 11:16:49,161][04865] Updated weights for policy 0, policy_version 740 (0.0012)
[2025-07-07 11:16:54,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3943.3). Total num frames: 3047424. Throughput: 0: 995.7. Samples: 762194. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:16:54,118][04410] Avg episode reward: [(0, '22.351')]
[2025-07-07 11:16:58,744][04865] Updated weights for policy 0, policy_version 750 (0.0016)
[2025-07-07 11:16:59,116][04410] Fps is (10 sec: 4505.8, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 3072000. Throughput: 0: 997.9. Samples: 765560. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:16:59,118][04410] Avg episode reward: [(0, '21.328')]
[2025-07-07 11:17:04,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 3088384. Throughput: 0: 982.2. Samples: 771532. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:17:04,118][04410] Avg episode reward: [(0, '23.128')]
[2025-07-07 11:17:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.8, 300 sec: 3943.3). Total num frames: 3108864. Throughput: 0: 1005.6. Samples: 777404. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:17:09,121][04410] Avg episode reward: [(0, '22.824')]
[2025-07-07 11:17:09,462][04865] Updated weights for policy 0, policy_version 760 (0.0015)
[2025-07-07 11:17:14,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3133440. Throughput: 0: 1005.9. Samples: 780746. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:17:14,120][04410] Avg episode reward: [(0, '22.417')]
[2025-07-07 11:17:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3145728. Throughput: 0: 979.6. Samples: 786298. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:17:19,118][04410] Avg episode reward: [(0, '22.653')]
[2025-07-07 11:17:19,125][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000768_3145728.pth...
[2025-07-07 11:17:19,240][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000537_2199552.pth
[2025-07-07 11:17:20,008][04865] Updated weights for policy 0, policy_version 770 (0.0021)
[2025-07-07 11:17:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 3170304. Throughput: 0: 1004.3. Samples: 792370. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:17:24,118][04410] Avg episode reward: [(0, '22.325')]
[2025-07-07 11:17:29,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3190784. Throughput: 0: 1005.8. Samples: 795738. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:17:29,121][04410] Avg episode reward: [(0, '21.437')]
[2025-07-07 11:17:29,951][04865] Updated weights for policy 0, policy_version 780 (0.0013)
[2025-07-07 11:17:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 3943.3). Total num frames: 3207168. Throughput: 0: 976.8. Samples: 800850. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:17:34,118][04410] Avg episode reward: [(0, '20.974')]
[2025-07-07 11:17:39,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3231744. Throughput: 0: 1005.6. Samples: 807446. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:17:39,118][04410] Avg episode reward: [(0, '21.031')]
[2025-07-07 11:17:39,894][04865] Updated weights for policy 0, policy_version 790 (0.0014)
[2025-07-07 11:17:44,121][04410] Fps is (10 sec: 4094.2, 60 sec: 3959.2, 300 sec: 3943.2). Total num frames: 3248128. Throughput: 0: 1005.5. Samples: 810812. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:17:44,126][04410] Avg episode reward: [(0, '21.772')]
[2025-07-07 11:17:49,117][04410] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3957.1). Total num frames: 3268608. Throughput: 0: 983.8. Samples: 815802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:17:49,118][04410] Avg episode reward: [(0, '21.641')]
[2025-07-07 11:17:50,762][04865] Updated weights for policy 0, policy_version 800 (0.0012)
[2025-07-07 11:17:54,116][04410] Fps is (10 sec: 4097.8, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 3289088. Throughput: 0: 1002.3. Samples: 822508. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:17:54,119][04410] Avg episode reward: [(0, '21.531')]
[2025-07-07 11:17:59,116][04410] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3309568. Throughput: 0: 1001.7. Samples: 825822. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:17:59,121][04410] Avg episode reward: [(0, '22.016')]
[2025-07-07 11:18:01,486][04865] Updated weights for policy 0, policy_version 810 (0.0016)
[2025-07-07 11:18:04,119][04410] Fps is (10 sec: 3276.1, 60 sec: 3891.1, 300 sec: 3929.4). Total num frames: 3321856. Throughput: 0: 984.0. Samples: 830578. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:18:04,121][04410] Avg episode reward: [(0, '21.659')]
[2025-07-07 11:18:09,116][04410] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3342336. Throughput: 0: 967.8. Samples: 835922. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:18:09,118][04410] Avg episode reward: [(0, '21.811')]
[2025-07-07 11:18:12,759][04865] Updated weights for policy 0, policy_version 820 (0.0027)
[2025-07-07 11:18:14,116][04410] Fps is (10 sec: 3687.2, 60 sec: 3754.7, 300 sec: 3929.4). Total num frames: 3358720. Throughput: 0: 959.7. Samples: 838924. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:18:14,118][04410] Avg episode reward: [(0, '23.099')]
[2025-07-07 11:18:19,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3383296. Throughput: 0: 965.7. Samples: 844308. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:18:19,118][04410] Avg episode reward: [(0, '23.799')]
[2025-07-07 11:18:22,653][04865] Updated weights for policy 0, policy_version 830 (0.0014)
[2025-07-07 11:18:24,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3943.4). Total num frames: 3403776. Throughput: 0: 969.9. Samples: 851090. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:18:24,118][04410] Avg episode reward: [(0, '24.653')]
[2025-07-07 11:18:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3943.3). Total num frames: 3420160. Throughput: 0: 951.3. Samples: 853616. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:18:29,118][04410] Avg episode reward: [(0, '24.997')]
[2025-07-07 11:18:29,126][04851] Saving new best policy, reward=24.997!
[2025-07-07 11:18:33,376][04865] Updated weights for policy 0, policy_version 840 (0.0021)
[2025-07-07 11:18:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3440640. Throughput: 0: 969.4. Samples: 859424. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:18:34,118][04410] Avg episode reward: [(0, '25.701')]
[2025-07-07 11:18:34,119][04851] Saving new best policy, reward=25.701!
[2025-07-07 11:18:39,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3465216. Throughput: 0: 968.6. Samples: 866094. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:18:39,118][04410] Avg episode reward: [(0, '25.883')]
[2025-07-07 11:18:39,136][04851] Saving new best policy, reward=25.883!
[2025-07-07 11:18:44,009][04865] Updated weights for policy 0, policy_version 850 (0.0016)
[2025-07-07 11:18:44,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.5, 300 sec: 3943.3). Total num frames: 3481600. Throughput: 0: 941.9. Samples: 868206. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-07-07 11:18:44,118][04410] Avg episode reward: [(0, '26.063')]
[2025-07-07 11:18:44,122][04851] Saving new best policy, reward=26.063!
[2025-07-07 11:18:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3502080. Throughput: 0: 974.2. Samples: 874414. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:18:49,118][04410] Avg episode reward: [(0, '26.155')]
[2025-07-07 11:18:49,123][04851] Saving new best policy, reward=26.155!
[2025-07-07 11:18:53,745][04865] Updated weights for policy 0, policy_version 860 (0.0028)
[2025-07-07 11:18:54,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 3522560. Throughput: 0: 995.9. Samples: 880736. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:18:54,118][04410] Avg episode reward: [(0, '25.006')]
[2025-07-07 11:18:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3957.2). Total num frames: 3538944. Throughput: 0: 975.0. Samples: 882798. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:18:59,118][04410] Avg episode reward: [(0, '25.537')]
[2025-07-07 11:19:04,018][04865] Updated weights for policy 0, policy_version 870 (0.0017)
[2025-07-07 11:19:04,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.9, 300 sec: 3971.0). Total num frames: 3563520. Throughput: 0: 1002.6. Samples: 889424. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:19:04,118][04410] Avg episode reward: [(0, '24.952')]
[2025-07-07 11:19:09,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3579904. Throughput: 0: 984.2. Samples: 895380. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:19:09,121][04410] Avg episode reward: [(0, '23.885')]
[2025-07-07 11:19:14,121][04410] Fps is (10 sec: 3684.7, 60 sec: 4027.4, 300 sec: 3957.1). Total num frames: 3600384. Throughput: 0: 981.8. Samples: 897800. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:19:14,124][04410] Avg episode reward: [(0, '23.096')]
[2025-07-07 11:19:14,509][04865] Updated weights for policy 0, policy_version 880 (0.0020)
[2025-07-07 11:19:19,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3620864. Throughput: 0: 1001.3. Samples: 904484. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:19:19,118][04410] Avg episode reward: [(0, '22.205')]
[2025-07-07 11:19:19,127][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000884_3620864.pth...
[2025-07-07 11:19:19,238][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000653_2674688.pth
[2025-07-07 11:19:24,117][04410] Fps is (10 sec: 3688.0, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3637248. Throughput: 0: 975.2. Samples: 909980. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:19:24,125][04410] Avg episode reward: [(0, '21.192')]
[2025-07-07 11:19:25,369][04865] Updated weights for policy 0, policy_version 890 (0.0024)
[2025-07-07 11:19:29,117][04410] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3661824. Throughput: 0: 990.5. Samples: 912780. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:19:29,121][04410] Avg episode reward: [(0, '19.920')]
[2025-07-07 11:19:34,116][04410] Fps is (10 sec: 4505.7, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3682304. Throughput: 0: 1004.0. Samples: 919594. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:19:34,118][04410] Avg episode reward: [(0, '19.682')]
[2025-07-07 11:19:34,425][04865] Updated weights for policy 0, policy_version 900 (0.0012)
[2025-07-07 11:19:39,116][04410] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 3698688. Throughput: 0: 978.4. Samples: 924766. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:19:39,118][04410] Avg episode reward: [(0, '20.256')]
[2025-07-07 11:19:44,117][04410] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3723264. Throughput: 0: 1004.9. Samples: 928020. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:19:44,118][04410] Avg episode reward: [(0, '19.746')]
[2025-07-07 11:19:45,032][04865] Updated weights for policy 0, policy_version 910 (0.0021)
[2025-07-07 11:19:49,118][04410] Fps is (10 sec: 4505.0, 60 sec: 4027.6, 300 sec: 3957.1). Total num frames: 3743744. Throughput: 0: 1007.1. Samples: 934744. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:19:49,119][04410] Avg episode reward: [(0, '21.471')]
[2025-07-07 11:19:54,117][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.4, 300 sec: 3971.0). Total num frames: 3760128. Throughput: 0: 988.9. Samples: 939882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-07-07 11:19:54,119][04410] Avg episode reward: [(0, '22.780')]
[2025-07-07 11:19:55,791][04865] Updated weights for policy 0, policy_version 920 (0.0016)
[2025-07-07 11:19:59,116][04410] Fps is (10 sec: 3686.9, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3780608. Throughput: 0: 1007.9. Samples: 943152. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:19:59,118][04410] Avg episode reward: [(0, '23.789')]
[2025-07-07 11:20:04,117][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.4, 300 sec: 3957.1). Total num frames: 3801088. Throughput: 0: 1009.1. Samples: 949892. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-07-07 11:20:04,121][04410] Avg episode reward: [(0, '24.271')]
[2025-07-07 11:20:06,192][04865] Updated weights for policy 0, policy_version 930 (0.0015)
[2025-07-07 11:20:09,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3821568. Throughput: 0: 999.9. Samples: 954976. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-07-07 11:20:09,122][04410] Avg episode reward: [(0, '24.479')]
[2025-07-07 11:20:14,116][04410] Fps is (10 sec: 4096.2, 60 sec: 4028.0, 300 sec: 3971.0). Total num frames: 3842048. Throughput: 0: 1014.3. Samples: 958422. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:20:14,121][04410] Avg episode reward: [(0, '23.403')]
[2025-07-07 11:20:15,364][04865] Updated weights for policy 0, policy_version 940 (0.0014)
[2025-07-07 11:20:19,118][04410] Fps is (10 sec: 4095.2, 60 sec: 4027.6, 300 sec: 3971.0). Total num frames: 3862528. Throughput: 0: 1004.7. Samples: 964806. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:20:19,122][04410] Avg episode reward: [(0, '22.915')]
[2025-07-07 11:20:24,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3971.0). Total num frames: 3883008. Throughput: 0: 1010.2. Samples: 970224. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:20:24,122][04410] Avg episode reward: [(0, '22.736')]
[2025-07-07 11:20:25,826][04865] Updated weights for policy 0, policy_version 950 (0.0012)
[2025-07-07 11:20:29,116][04410] Fps is (10 sec: 4096.8, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3903488. Throughput: 0: 1011.8. Samples: 973552. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:20:29,122][04410] Avg episode reward: [(0, '22.668')]
[2025-07-07 11:20:34,118][04410] Fps is (10 sec: 3685.9, 60 sec: 3959.4, 300 sec: 3957.2). Total num frames: 3919872. Throughput: 0: 996.4. Samples: 979580. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:20:34,119][04410] Avg episode reward: [(0, '23.501')]
[2025-07-07 11:20:36,578][04865] Updated weights for policy 0, policy_version 960 (0.0022)
[2025-07-07 11:20:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3940352. Throughput: 0: 1011.5. Samples: 985398. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:20:39,121][04410] Avg episode reward: [(0, '24.570')]
[2025-07-07 11:20:44,116][04410] Fps is (10 sec: 4506.2, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3964928. Throughput: 0: 1013.2. Samples: 988748. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:20:44,122][04410] Avg episode reward: [(0, '25.657')]
[2025-07-07 11:20:46,282][04865] Updated weights for policy 0, policy_version 970 (0.0012)
[2025-07-07 11:20:49,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.6, 300 sec: 3971.1). Total num frames: 3981312. Throughput: 0: 987.7. Samples: 994340. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:20:49,119][04410] Avg episode reward: [(0, '25.762')]
[2025-07-07 11:20:54,117][04410] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 4001792. Throughput: 0: 1015.8. Samples: 1000686. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:20:54,118][04410] Avg episode reward: [(0, '26.249')]
[2025-07-07 11:20:54,127][04851] Saving new best policy, reward=26.249!
[2025-07-07 11:20:56,298][04865] Updated weights for policy 0, policy_version 980 (0.0012)
[2025-07-07 11:20:59,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 4022272. Throughput: 0: 1009.4. Samples: 1003844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-07-07 11:20:59,122][04410] Avg episode reward: [(0, '26.337')]
[2025-07-07 11:20:59,126][04851] Saving new best policy, reward=26.337!
[2025-07-07 11:21:04,116][04410] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3971.1). Total num frames: 4038656. Throughput: 0: 981.1. Samples: 1008954. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:21:04,118][04410] Avg episode reward: [(0, '26.193')]
[2025-07-07 11:21:06,875][04865] Updated weights for policy 0, policy_version 990 (0.0019)
[2025-07-07 11:21:09,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 4063232. Throughput: 0: 1009.0. Samples: 1015630. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:21:09,118][04410] Avg episode reward: [(0, '26.069')]
[2025-07-07 11:21:14,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 4083712. Throughput: 0: 1010.4. Samples: 1019022. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:21:14,121][04410] Avg episode reward: [(0, '25.762')]
[2025-07-07 11:21:17,468][04865] Updated weights for policy 0, policy_version 1000 (0.0011)
[2025-07-07 11:21:19,117][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3971.0). Total num frames: 4100096. Throughput: 0: 990.0. Samples: 1024130. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:21:19,118][04410] Avg episode reward: [(0, '23.299')]
[2025-07-07 11:21:19,124][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001001_4100096.pth...
[2025-07-07 11:21:19,223][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000768_3145728.pth
[2025-07-07 11:21:24,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 4124672. Throughput: 0: 1008.7. Samples: 1030788. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:21:24,118][04410] Avg episode reward: [(0, '23.117')]
[2025-07-07 11:21:26,904][04865] Updated weights for policy 0, policy_version 1010 (0.0012)
[2025-07-07 11:21:29,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.1). Total num frames: 4141056. Throughput: 0: 1008.0. Samples: 1034110. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:21:29,118][04410] Avg episode reward: [(0, '22.573')]
[2025-07-07 11:21:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.8, 300 sec: 3971.0). Total num frames: 4161536. Throughput: 0: 997.8. Samples: 1039240. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:21:34,118][04410] Avg episode reward: [(0, '22.563')]
[2025-07-07 11:21:37,201][04865] Updated weights for policy 0, policy_version 1020 (0.0018)
[2025-07-07 11:21:39,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 4182016. Throughput: 0: 1006.6. Samples: 1045984. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:21:39,125][04410] Avg episode reward: [(0, '24.236')]
[2025-07-07 11:21:44,119][04410] Fps is (10 sec: 4095.0, 60 sec: 3959.3, 300 sec: 3984.9). Total num frames: 4202496. Throughput: 0: 1003.1. Samples: 1048984. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:21:44,124][04410] Avg episode reward: [(0, '25.360')]
[2025-07-07 11:21:47,735][04865] Updated weights for policy 0, policy_version 1030 (0.0017)
[2025-07-07 11:21:49,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 4222976. Throughput: 0: 1010.7. Samples: 1054436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:21:49,118][04410] Avg episode reward: [(0, '25.004')]
[2025-07-07 11:21:54,116][04410] Fps is (10 sec: 4097.0, 60 sec: 4027.8, 300 sec: 3971.0). Total num frames: 4243456. Throughput: 0: 1013.9. Samples: 1061254. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:21:54,118][04410] Avg episode reward: [(0, '25.504')]
[2025-07-07 11:21:58,725][04865] Updated weights for policy 0, policy_version 1040 (0.0018)
[2025-07-07 11:21:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 4259840. Throughput: 0: 993.7. Samples: 1063740. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:21:59,121][04410] Avg episode reward: [(0, '26.627')]
[2025-07-07 11:21:59,132][04851] Saving new best policy, reward=26.627!
[2025-07-07 11:22:04,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 4284416. Throughput: 0: 1006.0. Samples: 1069398. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:22:04,120][04410] Avg episode reward: [(0, '26.072')]
[2025-07-07 11:22:07,893][04865] Updated weights for policy 0, policy_version 1050 (0.0019)
[2025-07-07 11:22:09,121][04410] Fps is (10 sec: 4503.6, 60 sec: 4027.4, 300 sec: 3971.0). Total num frames: 4304896. Throughput: 0: 1007.2. Samples: 1076118. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:22:09,122][04410] Avg episode reward: [(0, '25.324')]
[2025-07-07 11:22:14,119][04410] Fps is (10 sec: 2866.5, 60 sec: 3822.8, 300 sec: 3957.1). Total num frames: 4313088. Throughput: 0: 972.3. Samples: 1077864. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:22:14,124][04410] Avg episode reward: [(0, '25.575')]
[2025-07-07 11:22:19,116][04410] Fps is (10 sec: 3278.3, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 4337664. Throughput: 0: 970.1. Samples: 1082894. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:22:19,118][04410] Avg episode reward: [(0, '26.069')]
[2025-07-07 11:22:19,854][04865] Updated weights for policy 0, policy_version 1060 (0.0013)
[2025-07-07 11:22:24,116][04410] Fps is (10 sec: 4506.7, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 4358144. Throughput: 0: 971.3. Samples: 1089694. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:22:24,118][04410] Avg episode reward: [(0, '24.773')]
[2025-07-07 11:22:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 4374528. Throughput: 0: 962.6. Samples: 1092298. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:22:29,118][04410] Avg episode reward: [(0, '24.591')]
[2025-07-07 11:22:30,493][04865] Updated weights for policy 0, policy_version 1070 (0.0015)
[2025-07-07 11:22:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 4395008. Throughput: 0: 969.1. Samples: 1098044. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:22:34,118][04410] Avg episode reward: [(0, '24.372')]
[2025-07-07 11:22:39,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3971.1). Total num frames: 4419584. Throughput: 0: 968.0. Samples: 1104814. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:22:39,118][04410] Avg episode reward: [(0, '24.872')]
[2025-07-07 11:22:40,176][04865] Updated weights for policy 0, policy_version 1080 (0.0016)
[2025-07-07 11:22:44,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3957.2). Total num frames: 4435968. Throughput: 0: 960.9. Samples: 1106980. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:22:44,118][04410] Avg episode reward: [(0, '24.926')]
[2025-07-07 11:22:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 4456448. Throughput: 0: 973.8. Samples: 1113218. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:22:49,118][04410] Avg episode reward: [(0, '27.250')]
[2025-07-07 11:22:49,124][04851] Saving new best policy, reward=27.250!
[2025-07-07 11:22:50,164][04865] Updated weights for policy 0, policy_version 1090 (0.0015)
[2025-07-07 11:22:54,121][04410] Fps is (10 sec: 4094.2, 60 sec: 3890.9, 300 sec: 3957.1). Total num frames: 4476928. Throughput: 0: 967.0. Samples: 1119632. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:22:54,122][04410] Avg episode reward: [(0, '28.638')]
[2025-07-07 11:22:54,124][04851] Saving new best policy, reward=28.638!
[2025-07-07 11:22:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.1). Total num frames: 4493312. Throughput: 0: 972.0. Samples: 1121602. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:22:59,118][04410] Avg episode reward: [(0, '28.811')]
[2025-07-07 11:22:59,133][04851] Saving new best policy, reward=28.811!
[2025-07-07 11:23:01,092][04865] Updated weights for policy 0, policy_version 1100 (0.0024)
[2025-07-07 11:23:04,116][04410] Fps is (10 sec: 4097.8, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 4517888. Throughput: 0: 1007.5. Samples: 1128232. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:23:04,121][04410] Avg episode reward: [(0, '28.777')]
[2025-07-07 11:23:09,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3823.2, 300 sec: 3984.9). Total num frames: 4534272. Throughput: 0: 988.7. Samples: 1134186. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:23:09,118][04410] Avg episode reward: [(0, '27.938')]
[2025-07-07 11:23:11,621][04865] Updated weights for policy 0, policy_version 1110 (0.0014)
[2025-07-07 11:23:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.9, 300 sec: 3971.0). Total num frames: 4554752. Throughput: 0: 984.9. Samples: 1136620. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:23:14,121][04410] Avg episode reward: [(0, '26.835')]
[2025-07-07 11:23:19,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 4579328. Throughput: 0: 1005.0. Samples: 1143270. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:23:19,122][04410] Avg episode reward: [(0, '24.952')]
[2025-07-07 11:23:19,130][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001118_4579328.pth...
[2025-07-07 11:23:19,131][04410] No heartbeat for components: RolloutWorker_w1 (1174 seconds), RolloutWorker_w4 (1174 seconds), RolloutWorker_w6 (1174 seconds)
[2025-07-07 11:23:19,225][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000884_3620864.pth
[2025-07-07 11:23:20,991][04865] Updated weights for policy 0, policy_version 1120 (0.0017)
[2025-07-07 11:23:24,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 4595712. Throughput: 0: 975.2. Samples: 1148700. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:23:24,127][04410] Avg episode reward: [(0, '24.873')]
[2025-07-07 11:23:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 4616192. Throughput: 0: 989.1. Samples: 1151488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-07-07 11:23:29,122][04410] Avg episode reward: [(0, '26.727')]
[2025-07-07 11:23:31,548][04865] Updated weights for policy 0, policy_version 1130 (0.0013)
[2025-07-07 11:23:34,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 4636672. Throughput: 0: 1002.3. Samples: 1158320. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:23:34,118][04410] Avg episode reward: [(0, '26.959')]
[2025-07-07 11:23:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 4653056. Throughput: 0: 974.4. Samples: 1163474. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:23:39,121][04410] Avg episode reward: [(0, '25.947')]
[2025-07-07 11:23:42,174][04865] Updated weights for policy 0, policy_version 1140 (0.0018)
[2025-07-07 11:23:44,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 4677632. Throughput: 0: 1001.2. Samples: 1166654. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:23:44,118][04410] Avg episode reward: [(0, '24.509')]
[2025-07-07 11:23:49,119][04410] Fps is (10 sec: 4504.5, 60 sec: 4027.6, 300 sec: 3984.9). Total num frames: 4698112. Throughput: 0: 1003.9. Samples: 1173408. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:23:49,120][04410] Avg episode reward: [(0, '26.463')]
[2025-07-07 11:23:52,922][04865] Updated weights for policy 0, policy_version 1150 (0.0012)
[2025-07-07 11:23:54,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 3984.9). Total num frames: 4714496. Throughput: 0: 983.5. Samples: 1178442. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:23:54,120][04410] Avg episode reward: [(0, '25.330')]
[2025-07-07 11:23:59,117][04410] Fps is (10 sec: 3687.2, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 4734976. Throughput: 0: 1003.7. Samples: 1181788. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:23:59,119][04410] Avg episode reward: [(0, '24.417')]
[2025-07-07 11:24:02,010][04865] Updated weights for policy 0, policy_version 1160 (0.0013)
[2025-07-07 11:24:04,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 4755456. Throughput: 0: 1002.8. Samples: 1188396. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:24:04,123][04410] Avg episode reward: [(0, '26.356')]
[2025-07-07 11:24:09,116][04410] Fps is (10 sec: 4096.2, 60 sec: 4027.7, 300 sec: 3985.0). Total num frames: 4775936. Throughput: 0: 994.7. Samples: 1193460. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:24:09,121][04410] Avg episode reward: [(0, '26.686')]
[2025-07-07 11:24:12,633][04865] Updated weights for policy 0, policy_version 1170 (0.0021)
[2025-07-07 11:24:14,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 4796416. Throughput: 0: 1008.3. Samples: 1196862. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:24:14,119][04410] Avg episode reward: [(0, '28.029')]
[2025-07-07 11:24:19,117][04410] Fps is (10 sec: 3686.2, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 4812800. Throughput: 0: 999.1. Samples: 1203280. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:24:19,121][04410] Avg episode reward: [(0, '26.353')]
[2025-07-07 11:24:23,483][04865] Updated weights for policy 0, policy_version 1180 (0.0027)
[2025-07-07 11:24:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 4833280. Throughput: 0: 1000.3. Samples: 1208486. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:24:24,123][04410] Avg episode reward: [(0, '24.729')]
[2025-07-07 11:24:29,116][04410] Fps is (10 sec: 4096.2, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 4853760. Throughput: 0: 1000.8. Samples: 1211688. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:24:29,120][04410] Avg episode reward: [(0, '22.708')]
[2025-07-07 11:24:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 4870144. Throughput: 0: 980.6. Samples: 1217534. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:24:34,120][04410] Avg episode reward: [(0, '23.491')]
[2025-07-07 11:24:34,187][04865] Updated weights for policy 0, policy_version 1190 (0.0014)
[2025-07-07 11:24:39,117][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 4894720. Throughput: 0: 991.2. Samples: 1223048. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:24:39,118][04410] Avg episode reward: [(0, '24.052')]
[2025-07-07 11:24:43,809][04865] Updated weights for policy 0, policy_version 1200 (0.0012)
[2025-07-07 11:24:44,117][04410] Fps is (10 sec: 4505.5, 60 sec: 3959.4, 300 sec: 3971.1). Total num frames: 4915200. Throughput: 0: 989.6. Samples: 1226322. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:24:44,127][04410] Avg episode reward: [(0, '23.857')]
[2025-07-07 11:24:49,116][04410] Fps is (10 sec: 3276.8, 60 sec: 3823.1, 300 sec: 3957.2). Total num frames: 4927488. Throughput: 0: 966.3. Samples: 1231878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:24:49,118][04410] Avg episode reward: [(0, '24.539')]
[2025-07-07 11:24:54,116][04410] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 4952064. Throughput: 0: 985.6. Samples: 1237814. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:24:54,118][04410] Avg episode reward: [(0, '25.412')]
[2025-07-07 11:24:54,863][04865] Updated weights for policy 0, policy_version 1210 (0.0013)
[2025-07-07 11:24:59,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 4972544. Throughput: 0: 983.5. Samples: 1241118. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:24:59,118][04410] Avg episode reward: [(0, '26.155')]
[2025-07-07 11:25:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 4988928. Throughput: 0: 953.1. Samples: 1246168. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:25:04,122][04410] Avg episode reward: [(0, '26.506')]
[2025-07-07 11:25:05,569][04865] Updated weights for policy 0, policy_version 1220 (0.0013)
[2025-07-07 11:25:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 5009408. Throughput: 0: 982.8. Samples: 1252712. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:25:09,118][04410] Avg episode reward: [(0, '24.925')]
[2025-07-07 11:25:14,122][04410] Fps is (10 sec: 4093.8, 60 sec: 3890.9, 300 sec: 3957.1). Total num frames: 5029888. Throughput: 0: 987.0. Samples: 1256108. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:25:14,123][04410] Avg episode reward: [(0, '24.723')]
[2025-07-07 11:25:15,911][04865] Updated weights for policy 0, policy_version 1230 (0.0013)
[2025-07-07 11:25:19,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 5050368. Throughput: 0: 968.6. Samples: 1261122. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:25:19,121][04410] Avg episode reward: [(0, '24.505')]
[2025-07-07 11:25:19,129][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001233_5050368.pth...
[2025-07-07 11:25:19,233][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001001_4100096.pth
[2025-07-07 11:25:24,117][04410] Fps is (10 sec: 4098.1, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 5070848. Throughput: 0: 995.9. Samples: 1267862. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:25:24,122][04410] Avg episode reward: [(0, '23.796')]
[2025-07-07 11:25:25,358][04865] Updated weights for policy 0, policy_version 1240 (0.0015)
[2025-07-07 11:25:29,117][04410] Fps is (10 sec: 4095.8, 60 sec: 3959.4, 300 sec: 3971.0). Total num frames: 5091328. Throughput: 0: 996.2. Samples: 1271150. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:25:29,118][04410] Avg episode reward: [(0, '23.958')]
[2025-07-07 11:25:34,116][04410] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 5107712. Throughput: 0: 983.8. Samples: 1276148. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:25:34,118][04410] Avg episode reward: [(0, '25.712')]
[2025-07-07 11:25:36,031][04865] Updated weights for policy 0, policy_version 1250 (0.0014)
[2025-07-07 11:25:39,116][04410] Fps is (10 sec: 4096.3, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 5132288. Throughput: 0: 1001.4. Samples: 1282876. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:25:39,122][04410] Avg episode reward: [(0, '25.415')]
[2025-07-07 11:25:44,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 5148672. Throughput: 0: 997.1. Samples: 1285986. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:25:44,122][04410] Avg episode reward: [(0, '26.717')]
[2025-07-07 11:25:46,667][04865] Updated weights for policy 0, policy_version 1260 (0.0015)
[2025-07-07 11:25:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5169152. Throughput: 0: 1003.1. Samples: 1291306. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:25:49,122][04410] Avg episode reward: [(0, '26.379')]
[2025-07-07 11:25:54,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 5193728. Throughput: 0: 1008.8. Samples: 1298110. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:25:54,122][04410] Avg episode reward: [(0, '25.299')]
[2025-07-07 11:25:56,191][04865] Updated weights for policy 0, policy_version 1270 (0.0014)
[2025-07-07 11:25:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 5206016. Throughput: 0: 995.1. Samples: 1300882. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:25:59,118][04410] Avg episode reward: [(0, '25.718')]
[2025-07-07 11:26:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5230592. Throughput: 0: 1007.7. Samples: 1306468. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:26:04,119][04410] Avg episode reward: [(0, '24.276')]
[2025-07-07 11:26:06,318][04865] Updated weights for policy 0, policy_version 1280 (0.0012)
[2025-07-07 11:26:09,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5251072. Throughput: 0: 1007.2. Samples: 1313186. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:26:09,118][04410] Avg episode reward: [(0, '24.521')]
[2025-07-07 11:26:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 3957.2). Total num frames: 5267456. Throughput: 0: 986.1. Samples: 1315526. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:26:14,119][04410] Avg episode reward: [(0, '24.446')]
[2025-07-07 11:26:18,712][04865] Updated weights for policy 0, policy_version 1290 (0.0012)
[2025-07-07 11:26:19,116][04410] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 5283840. Throughput: 0: 970.8. Samples: 1319834. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:26:19,119][04410] Avg episode reward: [(0, '25.001')]
[2025-07-07 11:26:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 5304320. Throughput: 0: 968.4. Samples: 1326456. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:26:24,120][04410] Avg episode reward: [(0, '23.962')]
[2025-07-07 11:26:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3929.4). Total num frames: 5320704. Throughput: 0: 944.7. Samples: 1328496. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:26:29,122][04410] Avg episode reward: [(0, '22.582')]
[2025-07-07 11:26:29,327][04865] Updated weights for policy 0, policy_version 1300 (0.0012)
[2025-07-07 11:26:34,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 5345280. Throughput: 0: 970.6. Samples: 1334984. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:26:34,122][04410] Avg episode reward: [(0, '22.813')]
[2025-07-07 11:26:39,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3929.4). Total num frames: 5361664. Throughput: 0: 956.0. Samples: 1341128. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:26:39,120][04410] Avg episode reward: [(0, '21.823')]
[2025-07-07 11:26:39,306][04865] Updated weights for policy 0, policy_version 1310 (0.0014)
[2025-07-07 11:26:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 5382144. Throughput: 0: 947.9. Samples: 1343538. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:26:44,120][04410] Avg episode reward: [(0, '22.500')]
[2025-07-07 11:26:48,914][04865] Updated weights for policy 0, policy_version 1320 (0.0020)
[2025-07-07 11:26:49,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 5406720. Throughput: 0: 974.1. Samples: 1350302. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:26:49,118][04410] Avg episode reward: [(0, '23.382')]
[2025-07-07 11:26:54,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3943.3). Total num frames: 5423104. Throughput: 0: 951.6. Samples: 1356008. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:26:54,124][04410] Avg episode reward: [(0, '25.172')]
[2025-07-07 11:26:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 5443584. Throughput: 0: 960.9. Samples: 1358768. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:26:59,118][04410] Avg episode reward: [(0, '25.449')]
[2025-07-07 11:26:59,480][04865] Updated weights for policy 0, policy_version 1330 (0.0012)
[2025-07-07 11:27:04,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 5468160. Throughput: 0: 1013.2. Samples: 1365428. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:27:04,120][04410] Avg episode reward: [(0, '25.280')]
[2025-07-07 11:27:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3957.2). Total num frames: 5480448. Throughput: 0: 982.6. Samples: 1370674. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:27:09,122][04410] Avg episode reward: [(0, '24.690')]
[2025-07-07 11:27:10,107][04865] Updated weights for policy 0, policy_version 1340 (0.0015)
[2025-07-07 11:27:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 5505024. Throughput: 0: 1008.5. Samples: 1373878. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:27:14,118][04410] Avg episode reward: [(0, '25.535')]
[2025-07-07 11:27:19,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5525504. Throughput: 0: 1014.7. Samples: 1380646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:27:19,119][04410] Avg episode reward: [(0, '26.111')]
[2025-07-07 11:27:19,127][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001349_5525504.pth...
[2025-07-07 11:27:19,261][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001118_4579328.pth
[2025-07-07 11:27:19,668][04865] Updated weights for policy 0, policy_version 1350 (0.0015)
[2025-07-07 11:27:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 5541888. Throughput: 0: 989.4. Samples: 1385652. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:27:24,118][04410] Avg episode reward: [(0, '26.636')]
[2025-07-07 11:27:29,117][04410] Fps is (10 sec: 4095.9, 60 sec: 4096.0, 300 sec: 3971.0). Total num frames: 5566464. Throughput: 0: 1009.3. Samples: 1388958. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:27:29,118][04410] Avg episode reward: [(0, '26.993')]
[2025-07-07 11:27:30,007][04865] Updated weights for policy 0, policy_version 1360 (0.0021)
[2025-07-07 11:27:34,119][04410] Fps is (10 sec: 4095.0, 60 sec: 3959.3, 300 sec: 3943.2). Total num frames: 5582848. Throughput: 0: 1005.4. Samples: 1395546. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:27:34,125][04410] Avg episode reward: [(0, '27.653')]
[2025-07-07 11:27:39,116][04410] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5603328. Throughput: 0: 989.5. Samples: 1400536. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:27:39,118][04410] Avg episode reward: [(0, '27.091')]
[2025-07-07 11:27:40,762][04865] Updated weights for policy 0, policy_version 1370 (0.0014)
[2025-07-07 11:27:44,116][04410] Fps is (10 sec: 4097.0, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5623808. Throughput: 0: 1002.8. Samples: 1403894. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:27:44,118][04410] Avg episode reward: [(0, '27.525')]
[2025-07-07 11:27:49,120][04410] Fps is (10 sec: 4094.6, 60 sec: 3959.2, 300 sec: 3957.2). Total num frames: 5644288. Throughput: 0: 1002.1. Samples: 1410528. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:27:49,121][04410] Avg episode reward: [(0, '26.208')]
[2025-07-07 11:27:51,332][04865] Updated weights for policy 0, policy_version 1380 (0.0012)
[2025-07-07 11:27:54,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 5664768. Throughput: 0: 1001.2. Samples: 1415728. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:27:54,118][04410] Avg episode reward: [(0, '25.335')]
[2025-07-07 11:27:59,116][04410] Fps is (10 sec: 4097.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5685248. Throughput: 0: 1005.4. Samples: 1419120. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:27:59,123][04410] Avg episode reward: [(0, '25.489')]
[2025-07-07 11:28:00,502][04865] Updated weights for policy 0, policy_version 1390 (0.0014)
[2025-07-07 11:28:04,118][04410] Fps is (10 sec: 3685.7, 60 sec: 3891.1, 300 sec: 3957.1). Total num frames: 5701632. Throughput: 0: 989.8. Samples: 1425190. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:28:04,120][04410] Avg episode reward: [(0, '25.390')]
[2025-07-07 11:28:09,117][04410] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3957.1). Total num frames: 5722112. Throughput: 0: 1004.7. Samples: 1430864. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:28:09,118][04410] Avg episode reward: [(0, '24.774')]
[2025-07-07 11:28:11,063][04865] Updated weights for policy 0, policy_version 1400 (0.0022)
[2025-07-07 11:28:14,116][04410] Fps is (10 sec: 4506.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5746688. Throughput: 0: 1006.1. Samples: 1434234. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:28:14,123][04410] Avg episode reward: [(0, '25.222')]
[2025-07-07 11:28:19,116][04410] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 5763072. Throughput: 0: 986.7. Samples: 1439946. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:28:19,118][04410] Avg episode reward: [(0, '26.272')]
[2025-07-07 11:28:21,771][04865] Updated weights for policy 0, policy_version 1410 (0.0015)
[2025-07-07 11:28:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5783552. Throughput: 0: 1012.5. Samples: 1446100. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:28:24,118][04410] Avg episode reward: [(0, '25.761')]
[2025-07-07 11:28:29,121][04410] Fps is (10 sec: 4503.6, 60 sec: 4027.5, 300 sec: 3971.0). Total num frames: 5808128. Throughput: 0: 1012.4. Samples: 1449456. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:28:29,122][04410] Avg episode reward: [(0, '26.558')]
[2025-07-07 11:28:31,865][04865] Updated weights for policy 0, policy_version 1420 (0.0012)
[2025-07-07 11:28:34,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.9, 300 sec: 3971.0). Total num frames: 5824512. Throughput: 0: 982.0. Samples: 1454714. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:28:34,122][04410] Avg episode reward: [(0, '26.314')]
[2025-07-07 11:28:39,116][04410] Fps is (10 sec: 3688.0, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5844992. Throughput: 0: 1012.7. Samples: 1461300. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:28:39,121][04410] Avg episode reward: [(0, '26.696')]
[2025-07-07 11:28:41,345][04865] Updated weights for policy 0, policy_version 1430 (0.0020)
[2025-07-07 11:28:44,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 5865472. Throughput: 0: 1012.7. Samples: 1464690. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:28:44,121][04410] Avg episode reward: [(0, '25.408')]
[2025-07-07 11:28:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.7, 300 sec: 3957.2). Total num frames: 5881856. Throughput: 0: 990.0. Samples: 1469740. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:28:49,118][04410] Avg episode reward: [(0, '26.377')]
[2025-07-07 11:28:51,910][04865] Updated weights for policy 0, policy_version 1440 (0.0020)
[2025-07-07 11:28:54,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 5906432. Throughput: 0: 1014.4. Samples: 1476512. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:28:54,122][04410] Avg episode reward: [(0, '25.362')]
[2025-07-07 11:28:59,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 5922816. Throughput: 0: 1015.6. Samples: 1479934. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:28:59,118][04410] Avg episode reward: [(0, '25.706')]
[2025-07-07 11:29:02,695][04865] Updated weights for policy 0, policy_version 1450 (0.0016)
[2025-07-07 11:29:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.9, 300 sec: 3957.2). Total num frames: 5943296. Throughput: 0: 999.1. Samples: 1484904. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:29:04,118][04410] Avg episode reward: [(0, '26.734')]
[2025-07-07 11:29:09,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3971.0). Total num frames: 5967872. Throughput: 0: 1013.3. Samples: 1491700. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:29:09,118][04410] Avg episode reward: [(0, '26.810')]
[2025-07-07 11:29:12,020][04865] Updated weights for policy 0, policy_version 1460 (0.0012)
[2025-07-07 11:29:14,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 5984256. Throughput: 0: 1008.9. Samples: 1494854. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:29:14,118][04410] Avg episode reward: [(0, '28.140')]
[2025-07-07 11:29:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 6004736. Throughput: 0: 1007.8. Samples: 1500066. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:29:19,121][04410] Avg episode reward: [(0, '26.195')]
[2025-07-07 11:29:19,128][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
[2025-07-07 11:29:19,243][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001233_5050368.pth
[2025-07-07 11:29:22,431][04865] Updated weights for policy 0, policy_version 1470 (0.0020)
[2025-07-07 11:29:24,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 6029312. Throughput: 0: 1010.0. Samples: 1506750. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:29:24,118][04410] Avg episode reward: [(0, '27.518')]
[2025-07-07 11:29:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.5, 300 sec: 3971.0). Total num frames: 6041600. Throughput: 0: 995.4. Samples: 1509484. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:29:29,121][04410] Avg episode reward: [(0, '27.023')]
[2025-07-07 11:29:32,920][04865] Updated weights for policy 0, policy_version 1480 (0.0017)
[2025-07-07 11:29:34,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 6066176. Throughput: 0: 1009.1. Samples: 1515148. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:29:34,121][04410] Avg episode reward: [(0, '28.155')]
[2025-07-07 11:29:39,117][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 6086656. Throughput: 0: 1009.0. Samples: 1521918. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:29:39,118][04410] Avg episode reward: [(0, '26.364')]
[2025-07-07 11:29:43,649][04865] Updated weights for policy 0, policy_version 1490 (0.0013)
[2025-07-07 11:29:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 6103040. Throughput: 0: 985.1. Samples: 1524264. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:29:44,118][04410] Avg episode reward: [(0, '26.221')]
[2025-07-07 11:29:49,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 6123520. Throughput: 0: 1008.5. Samples: 1530288. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:29:49,118][04410] Avg episode reward: [(0, '26.580')]
[2025-07-07 11:29:52,570][04865] Updated weights for policy 0, policy_version 1500 (0.0014)
[2025-07-07 11:29:54,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6148096. Throughput: 0: 1004.0. Samples: 1536878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:29:54,118][04410] Avg episode reward: [(0, '26.815')]
[2025-07-07 11:29:59,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6164480. Throughput: 0: 981.5. Samples: 1539020. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-07-07 11:29:59,118][04410] Avg episode reward: [(0, '25.551')]
[2025-07-07 11:30:03,346][04865] Updated weights for policy 0, policy_version 1510 (0.0014)
[2025-07-07 11:30:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6184960. Throughput: 0: 1006.9. Samples: 1545378. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:30:04,118][04410] Avg episode reward: [(0, '25.389')]
[2025-07-07 11:30:09,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3985.0). Total num frames: 6205440. Throughput: 0: 996.0. Samples: 1551570. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:30:09,118][04410] Avg episode reward: [(0, '25.556')]
[2025-07-07 11:30:14,026][04865] Updated weights for policy 0, policy_version 1520 (0.0018)
[2025-07-07 11:30:14,117][04410] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6225920. Throughput: 0: 984.1. Samples: 1553770. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:30:14,118][04410] Avg episode reward: [(0, '25.680')]
[2025-07-07 11:30:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 6242304. Throughput: 0: 994.2. Samples: 1559888. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:30:19,120][04410] Avg episode reward: [(0, '25.901')]
[2025-07-07 11:30:24,117][04410] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3957.2). Total num frames: 6258688. Throughput: 0: 948.3. Samples: 1564594. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:30:24,118][04410] Avg episode reward: [(0, '25.175')]
[2025-07-07 11:30:26,242][04865] Updated weights for policy 0, policy_version 1530 (0.0021)
[2025-07-07 11:30:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 6279168. Throughput: 0: 955.2. Samples: 1567246. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:30:29,120][04410] Avg episode reward: [(0, '24.996')]
[2025-07-07 11:30:34,116][04410] Fps is (10 sec: 4096.2, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 6299648. Throughput: 0: 970.0. Samples: 1573938. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:30:34,118][04410] Avg episode reward: [(0, '26.059')]
[2025-07-07 11:30:35,712][04865] Updated weights for policy 0, policy_version 1540 (0.0013)
[2025-07-07 11:30:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3957.2). Total num frames: 6316032. Throughput: 0: 941.7. Samples: 1579254. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:30:39,121][04410] Avg episode reward: [(0, '25.106')]
[2025-07-07 11:30:44,120][04410] Fps is (10 sec: 3685.2, 60 sec: 3891.0, 300 sec: 3957.1). Total num frames: 6336512. Throughput: 0: 963.5. Samples: 1582382. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:30:44,121][04410] Avg episode reward: [(0, '25.939')]
[2025-07-07 11:30:46,039][04865] Updated weights for policy 0, policy_version 1550 (0.0012)
[2025-07-07 11:30:49,116][04410] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 6361088. Throughput: 0: 971.5. Samples: 1589096. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:30:49,118][04410] Avg episode reward: [(0, '25.199')]
[2025-07-07 11:30:54,116][04410] Fps is (10 sec: 4097.3, 60 sec: 3822.9, 300 sec: 3971.0). Total num frames: 6377472. Throughput: 0: 947.0. Samples: 1594184. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:30:54,118][04410] Avg episode reward: [(0, '26.472')]
[2025-07-07 11:30:56,502][04865] Updated weights for policy 0, policy_version 1560 (0.0020)
[2025-07-07 11:30:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 6397952. Throughput: 0: 973.7. Samples: 1597588. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:30:59,122][04410] Avg episode reward: [(0, '27.782')]
[2025-07-07 11:31:04,117][04410] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 6418432. Throughput: 0: 984.0. Samples: 1604168. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:31:04,121][04410] Avg episode reward: [(0, '27.391')]
[2025-07-07 11:31:07,190][04865] Updated weights for policy 0, policy_version 1570 (0.0012)
[2025-07-07 11:31:09,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 6438912. Throughput: 0: 993.0. Samples: 1609278. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:31:09,121][04410] Avg episode reward: [(0, '28.842')]
[2025-07-07 11:31:09,129][04851] Saving new best policy, reward=28.842!
[2025-07-07 11:31:14,116][04410] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 6459392. Throughput: 0: 1008.0. Samples: 1612604. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:31:14,122][04410] Avg episode reward: [(0, '28.846')]
[2025-07-07 11:31:14,125][04851] Saving new best policy, reward=28.846!
[2025-07-07 11:31:16,414][04865] Updated weights for policy 0, policy_version 1580 (0.0013)
[2025-07-07 11:31:19,118][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 6475776. Throughput: 0: 1001.6. Samples: 1619012. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:31:19,120][04410] Avg episode reward: [(0, '28.837')]
[2025-07-07 11:31:19,127][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001581_6475776.pth...
[2025-07-07 11:31:19,275][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001349_5525504.pth
[2025-07-07 11:31:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 6496256. Throughput: 0: 1000.7. Samples: 1624284. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:31:24,118][04410] Avg episode reward: [(0, '29.040')]
[2025-07-07 11:31:24,122][04851] Saving new best policy, reward=29.040!
[2025-07-07 11:31:27,144][04865] Updated weights for policy 0, policy_version 1590 (0.0012)
[2025-07-07 11:31:29,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6520832. Throughput: 0: 1004.7. Samples: 1627590. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:31:29,122][04410] Avg episode reward: [(0, '27.923')]
[2025-07-07 11:31:34,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 6537216. Throughput: 0: 988.4. Samples: 1633576. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:31:34,128][04410] Avg episode reward: [(0, '28.887')]
[2025-07-07 11:31:37,759][04865] Updated weights for policy 0, policy_version 1600 (0.0019)
[2025-07-07 11:31:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6557696. Throughput: 0: 1005.7. Samples: 1639442. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:31:39,118][04410] Avg episode reward: [(0, '29.273')]
[2025-07-07 11:31:39,129][04851] Saving new best policy, reward=29.273!
[2025-07-07 11:31:44,119][04410] Fps is (10 sec: 4095.0, 60 sec: 4027.8, 300 sec: 3971.0). Total num frames: 6578176. Throughput: 0: 1003.2. Samples: 1642734. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:31:44,122][04410] Avg episode reward: [(0, '28.271')]
[2025-07-07 11:31:48,078][04865] Updated weights for policy 0, policy_version 1610 (0.0019)
[2025-07-07 11:31:49,119][04410] Fps is (10 sec: 3685.5, 60 sec: 3891.0, 300 sec: 3971.0). Total num frames: 6594560. Throughput: 0: 982.6. Samples: 1648388. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:31:49,120][04410] Avg episode reward: [(0, '29.530')]
[2025-07-07 11:31:49,133][04851] Saving new best policy, reward=29.530!
[2025-07-07 11:31:54,116][04410] Fps is (10 sec: 4097.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6619136. Throughput: 0: 1006.9. Samples: 1654590. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:31:54,118][04410] Avg episode reward: [(0, '30.046')]
[2025-07-07 11:31:54,125][04851] Saving new best policy, reward=30.046!
[2025-07-07 11:31:57,393][04865] Updated weights for policy 0, policy_version 1620 (0.0012)
[2025-07-07 11:31:59,116][04410] Fps is (10 sec: 4506.7, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 6639616. Throughput: 0: 1007.4. Samples: 1657938. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:31:59,118][04410] Avg episode reward: [(0, '29.768')]
[2025-07-07 11:32:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 6656000. Throughput: 0: 979.5. Samples: 1663088. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:32:04,118][04410] Avg episode reward: [(0, '28.715')]
[2025-07-07 11:32:08,080][04865] Updated weights for policy 0, policy_version 1630 (0.0016)
[2025-07-07 11:32:09,117][04410] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6680576. Throughput: 0: 1008.1. Samples: 1669650. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:32:09,118][04410] Avg episode reward: [(0, '28.527')]
[2025-07-07 11:32:14,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6701056. Throughput: 0: 1010.0. Samples: 1673040. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:32:14,122][04410] Avg episode reward: [(0, '27.061')]
[2025-07-07 11:32:18,836][04865] Updated weights for policy 0, policy_version 1640 (0.0015)
[2025-07-07 11:32:19,116][04410] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6717440. Throughput: 0: 988.5. Samples: 1678060. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:32:19,123][04410] Avg episode reward: [(0, '27.418')]
[2025-07-07 11:32:24,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 6737920. Throughput: 0: 1008.8. Samples: 1684840. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:32:24,118][04410] Avg episode reward: [(0, '25.159')]
[2025-07-07 11:32:28,699][04865] Updated weights for policy 0, policy_version 1650 (0.0017)
[2025-07-07 11:32:29,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3985.0). Total num frames: 6758400. Throughput: 0: 1010.9. Samples: 1688224. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:32:29,118][04410] Avg episode reward: [(0, '25.869')]
[2025-07-07 11:32:34,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6778880. Throughput: 0: 996.6. Samples: 1693234. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:32:34,118][04410] Avg episode reward: [(0, '26.506')]
[2025-07-07 11:32:38,550][04865] Updated weights for policy 0, policy_version 1660 (0.0016)
[2025-07-07 11:32:39,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6799360. Throughput: 0: 1010.0. Samples: 1700038. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:32:39,121][04410] Avg episode reward: [(0, '28.299')]
[2025-07-07 11:32:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3971.1). Total num frames: 6815744. Throughput: 0: 1006.4. Samples: 1703224. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:32:44,118][04410] Avg episode reward: [(0, '28.100')]
[2025-07-07 11:32:48,949][04865] Updated weights for policy 0, policy_version 1670 (0.0016)
[2025-07-07 11:32:49,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4096.2, 300 sec: 3984.9). Total num frames: 6840320. Throughput: 0: 1009.5. Samples: 1708514. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:32:49,118][04410] Avg episode reward: [(0, '28.200')]
[2025-07-07 11:32:54,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6860800. Throughput: 0: 1013.8. Samples: 1715272. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:32:54,117][04410] Avg episode reward: [(0, '27.822')]
[2025-07-07 11:32:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 6877184. Throughput: 0: 1000.5. Samples: 1718062. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:32:59,120][04410] Avg episode reward: [(0, '27.782')]
[2025-07-07 11:32:59,615][04865] Updated weights for policy 0, policy_version 1680 (0.0015)
[2025-07-07 11:33:04,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6897664. Throughput: 0: 1012.2. Samples: 1723608. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:33:04,119][04410] Avg episode reward: [(0, '26.973')]
[2025-07-07 11:33:08,779][04865] Updated weights for policy 0, policy_version 1690 (0.0016)
[2025-07-07 11:33:09,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6922240. Throughput: 0: 1010.3. Samples: 1730304. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:33:09,118][04410] Avg episode reward: [(0, '26.367')]
[2025-07-07 11:33:14,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 6938624. Throughput: 0: 987.2. Samples: 1732650. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:33:14,122][04410] Avg episode reward: [(0, '27.778')]
[2025-07-07 11:33:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 6959104. Throughput: 0: 1010.4. Samples: 1738700. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:33:19,122][04410] Avg episode reward: [(0, '27.979')]
[2025-07-07 11:33:19,129][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001699_6959104.pth...
[2025-07-07 11:33:19,133][04410] No heartbeat for components: RolloutWorker_w1 (1774 seconds), RolloutWorker_w4 (1774 seconds), RolloutWorker_w6 (1774 seconds)
[2025-07-07 11:33:19,219][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth
[2025-07-07 11:33:19,627][04865] Updated weights for policy 0, policy_version 1700 (0.0012)
[2025-07-07 11:33:24,118][04410] Fps is (10 sec: 4095.4, 60 sec: 4027.6, 300 sec: 3971.1). Total num frames: 6979584. Throughput: 0: 1003.2. Samples: 1745182. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:33:24,119][04410] Avg episode reward: [(0, '29.377')]
[2025-07-07 11:33:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 6995968. Throughput: 0: 977.8. Samples: 1747224. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:33:29,121][04410] Avg episode reward: [(0, '30.687')]
[2025-07-07 11:33:29,200][04851] Saving new best policy, reward=30.687!
[2025-07-07 11:33:30,273][04865] Updated weights for policy 0, policy_version 1710 (0.0015)
[2025-07-07 11:33:34,116][04410] Fps is (10 sec: 4096.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7020544. Throughput: 0: 1004.0. Samples: 1753694. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:33:34,122][04410] Avg episode reward: [(0, '29.457')]
[2025-07-07 11:33:39,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 7036928. Throughput: 0: 988.5. Samples: 1759754. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:33:39,124][04410] Avg episode reward: [(0, '30.568')]
[2025-07-07 11:33:40,907][04865] Updated weights for policy 0, policy_version 1720 (0.0013)
[2025-07-07 11:33:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7057408. Throughput: 0: 980.0. Samples: 1762160. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:33:44,118][04410] Avg episode reward: [(0, '31.342')]
[2025-07-07 11:33:44,119][04851] Saving new best policy, reward=31.342!
[2025-07-07 11:33:49,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7081984. Throughput: 0: 1003.9. Samples: 1768782. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:33:49,121][04410] Avg episode reward: [(0, '31.539')]
[2025-07-07 11:33:49,134][04851] Saving new best policy, reward=31.539!
[2025-07-07 11:33:50,136][04865] Updated weights for policy 0, policy_version 1730 (0.0013)
[2025-07-07 11:33:54,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 7098368. Throughput: 0: 980.4. Samples: 1774422. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:33:54,127][04410] Avg episode reward: [(0, '31.393')]
[2025-07-07 11:33:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7118848. Throughput: 0: 990.7. Samples: 1777230. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:33:59,122][04410] Avg episode reward: [(0, '32.443')]
[2025-07-07 11:33:59,133][04851] Saving new best policy, reward=32.443!
[2025-07-07 11:34:00,735][04865] Updated weights for policy 0, policy_version 1740 (0.0015)
[2025-07-07 11:34:04,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 7139328. Throughput: 0: 1002.7. Samples: 1783822. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:04,118][04410] Avg episode reward: [(0, '31.883')]
[2025-07-07 11:34:09,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 7155712. Throughput: 0: 973.4. Samples: 1788982. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:09,118][04410] Avg episode reward: [(0, '32.560')]
[2025-07-07 11:34:09,128][04851] Saving new best policy, reward=32.560!
[2025-07-07 11:34:11,496][04865] Updated weights for policy 0, policy_version 1750 (0.0012)
[2025-07-07 11:34:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 7176192. Throughput: 0: 998.8. Samples: 1792172. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:14,123][04410] Avg episode reward: [(0, '29.917')]
[2025-07-07 11:34:19,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 7196672. Throughput: 0: 999.3. Samples: 1798662. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:34:19,118][04410] Avg episode reward: [(0, '28.426')]
[2025-07-07 11:34:23,512][04865] Updated weights for policy 0, policy_version 1760 (0.0023)
[2025-07-07 11:34:24,116][04410] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3957.2). Total num frames: 7208960. Throughput: 0: 948.3. Samples: 1802428. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:24,118][04410] Avg episode reward: [(0, '27.116')]
[2025-07-07 11:34:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 7233536. Throughput: 0: 966.0. Samples: 1805632. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:29,120][04410] Avg episode reward: [(0, '26.218')]
[2025-07-07 11:34:32,732][04865] Updated weights for policy 0, policy_version 1770 (0.0013)
[2025-07-07 11:34:34,117][04410] Fps is (10 sec: 4505.4, 60 sec: 3891.2, 300 sec: 3957.1). Total num frames: 7254016. Throughput: 0: 969.0. Samples: 1812388. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:34,119][04410] Avg episode reward: [(0, '25.891')]
[2025-07-07 11:34:39,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 7270400. Throughput: 0: 956.7. Samples: 1817474. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:39,122][04410] Avg episode reward: [(0, '26.116')]
[2025-07-07 11:34:43,362][04865] Updated weights for policy 0, policy_version 1780 (0.0013)
[2025-07-07 11:34:44,116][04410] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 7290880. Throughput: 0: 970.0. Samples: 1820880. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:44,122][04410] Avg episode reward: [(0, '27.012')]
[2025-07-07 11:34:49,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3943.3). Total num frames: 7311360. Throughput: 0: 973.6. Samples: 1827634. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:34:49,118][04410] Avg episode reward: [(0, '27.472')]
[2025-07-07 11:34:53,851][04865] Updated weights for policy 0, policy_version 1790 (0.0019)
[2025-07-07 11:34:54,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 7331840. Throughput: 0: 972.7. Samples: 1832754. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:34:54,118][04410] Avg episode reward: [(0, '28.551')]
[2025-07-07 11:34:59,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 7352320. Throughput: 0: 975.8. Samples: 1836084. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:34:59,119][04410] Avg episode reward: [(0, '27.963')]
[2025-07-07 11:35:03,829][04865] Updated weights for policy 0, policy_version 1800 (0.0021)
[2025-07-07 11:35:04,118][04410] Fps is (10 sec: 4095.3, 60 sec: 3891.1, 300 sec: 3957.1). Total num frames: 7372800. Throughput: 0: 974.3. Samples: 1842506. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:35:04,119][04410] Avg episode reward: [(0, '27.782')]
[2025-07-07 11:35:09,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 7393280. Throughput: 0: 1008.6. Samples: 1847814. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:35:09,118][04410] Avg episode reward: [(0, '27.831')]
[2025-07-07 11:35:13,621][04865] Updated weights for policy 0, policy_version 1810 (0.0016)
[2025-07-07 11:35:14,116][04410] Fps is (10 sec: 4096.7, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 7413760. Throughput: 0: 1013.4. Samples: 1851234. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:35:14,118][04410] Avg episode reward: [(0, '29.508')]
[2025-07-07 11:35:19,117][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 7430144. Throughput: 0: 997.9. Samples: 1857292. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:35:19,119][04410] Avg episode reward: [(0, '29.069')]
[2025-07-07 11:35:19,126][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001814_7430144.pth...
[2025-07-07 11:35:19,224][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001581_6475776.pth
[2025-07-07 11:35:24,117][04410] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 7450624. Throughput: 0: 1010.7. Samples: 1862956. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:35:24,123][04410] Avg episode reward: [(0, '29.085')]
[2025-07-07 11:35:24,250][04865] Updated weights for policy 0, policy_version 1820 (0.0013)
[2025-07-07 11:35:29,117][04410] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7475200. Throughput: 0: 1010.5. Samples: 1866354. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:35:29,118][04410] Avg episode reward: [(0, '29.590')]
[2025-07-07 11:35:34,117][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 7491584. Throughput: 0: 986.4. Samples: 1872024. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:35:34,120][04410] Avg episode reward: [(0, '29.859')]
[2025-07-07 11:35:34,911][04865] Updated weights for policy 0, policy_version 1830 (0.0015)
[2025-07-07 11:35:39,116][04410] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 3985.0). Total num frames: 7512064. Throughput: 0: 1010.1. Samples: 1878208. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:35:39,119][04410] Avg episode reward: [(0, '27.662')]
[2025-07-07 11:35:43,946][04865] Updated weights for policy 0, policy_version 1840 (0.0012)
[2025-07-07 11:35:44,119][04410] Fps is (10 sec: 4504.8, 60 sec: 4095.9, 300 sec: 3984.9). Total num frames: 7536640. Throughput: 0: 1009.7. Samples: 1881522. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:35:44,120][04410] Avg episode reward: [(0, '27.710')]
[2025-07-07 11:35:49,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7553024. Throughput: 0: 982.7. Samples: 1886724. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:35:49,118][04410] Avg episode reward: [(0, '27.681')]
[2025-07-07 11:35:54,116][04410] Fps is (10 sec: 3687.2, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7573504. Throughput: 0: 1012.1. Samples: 1893358. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:35:54,118][04410] Avg episode reward: [(0, '27.392')]
[2025-07-07 11:35:54,582][04865] Updated weights for policy 0, policy_version 1850 (0.0014)
[2025-07-07 11:35:59,118][04410] Fps is (10 sec: 4095.4, 60 sec: 4027.6, 300 sec: 3984.9). Total num frames: 7593984. Throughput: 0: 1011.2. Samples: 1896738. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:35:59,119][04410] Avg episode reward: [(0, '27.707')]
[2025-07-07 11:36:04,117][04410] Fps is (10 sec: 3686.3, 60 sec: 3959.6, 300 sec: 3971.0). Total num frames: 7610368. Throughput: 0: 988.4. Samples: 1901772. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:36:04,124][04410] Avg episode reward: [(0, '28.266')]
[2025-07-07 11:36:05,269][04865] Updated weights for policy 0, policy_version 1860 (0.0020)
[2025-07-07 11:36:09,116][04410] Fps is (10 sec: 4096.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7634944. Throughput: 0: 1012.7. Samples: 1908526. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:36:09,118][04410] Avg episode reward: [(0, '27.608')]
[2025-07-07 11:36:14,116][04410] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 7651328. Throughput: 0: 1012.5. Samples: 1911918. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:36:14,118][04410] Avg episode reward: [(0, '27.690')]
[2025-07-07 11:36:15,896][04865] Updated weights for policy 0, policy_version 1870 (0.0012)
[2025-07-07 11:36:19,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7671808. Throughput: 0: 999.0. Samples: 1916980. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:36:19,122][04410] Avg episode reward: [(0, '28.919')]
[2025-07-07 11:36:24,117][04410] Fps is (10 sec: 4505.5, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 7696384. Throughput: 0: 1012.4. Samples: 1923766. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:36:24,118][04410] Avg episode reward: [(0, '29.171')]
[2025-07-07 11:36:24,814][04865] Updated weights for policy 0, policy_version 1880 (0.0016)
[2025-07-07 11:36:29,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 7712768. Throughput: 0: 1009.4. Samples: 1926942. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:36:29,118][04410] Avg episode reward: [(0, '28.670')]
[2025-07-07 11:36:34,116][04410] Fps is (10 sec: 3686.5, 60 sec: 4027.8, 300 sec: 3984.9). Total num frames: 7733248. Throughput: 0: 1011.8. Samples: 1932256. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:36:34,118][04410] Avg episode reward: [(0, '29.088')]
[2025-07-07 11:36:35,548][04865] Updated weights for policy 0, policy_version 1890 (0.0012)
[2025-07-07 11:36:39,117][04410] Fps is (10 sec: 4505.5, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 7757824. Throughput: 0: 1013.2. Samples: 1938952. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:36:39,118][04410] Avg episode reward: [(0, '28.660')]
[2025-07-07 11:36:44,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3985.0). Total num frames: 7770112. Throughput: 0: 999.5. Samples: 1941716. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:36:44,118][04410] Avg episode reward: [(0, '27.277')]
[2025-07-07 11:36:46,004][04865] Updated weights for policy 0, policy_version 1900 (0.0017)
[2025-07-07 11:36:49,116][04410] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7794688. Throughput: 0: 1014.8. Samples: 1947438. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:36:49,118][04410] Avg episode reward: [(0, '27.814')]
[2025-07-07 11:36:54,118][04410] Fps is (10 sec: 4505.0, 60 sec: 4027.6, 300 sec: 3984.9). Total num frames: 7815168. Throughput: 0: 1015.1. Samples: 1954208. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:36:54,124][04410] Avg episode reward: [(0, '27.494')]
[2025-07-07 11:36:56,046][04865] Updated weights for policy 0, policy_version 1910 (0.0012)
[2025-07-07 11:36:59,116][04410] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3984.9). Total num frames: 7831552. Throughput: 0: 989.2. Samples: 1956432. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:36:59,118][04410] Avg episode reward: [(0, '27.025')]
[2025-07-07 11:37:04,116][04410] Fps is (10 sec: 4096.6, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 7856128. Throughput: 0: 1014.7. Samples: 1962640. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:37:04,118][04410] Avg episode reward: [(0, '26.202')]
[2025-07-07 11:37:05,842][04865] Updated weights for policy 0, policy_version 1920 (0.0012)
[2025-07-07 11:37:09,116][04410] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7876608. Throughput: 0: 1006.6. Samples: 1969062. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:37:09,123][04410] Avg episode reward: [(0, '28.079')]
[2025-07-07 11:37:14,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7892992. Throughput: 0: 982.7. Samples: 1971164. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:37:14,118][04410] Avg episode reward: [(0, '27.689')]
[2025-07-07 11:37:16,396][04865] Updated weights for policy 0, policy_version 1930 (0.0012)
[2025-07-07 11:37:19,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 7917568. Throughput: 0: 1012.5. Samples: 1977820. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-07-07 11:37:19,123][04410] Avg episode reward: [(0, '26.715')]
[2025-07-07 11:37:19,133][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001933_7917568.pth...
[2025-07-07 11:37:19,241][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001699_6959104.pth
[2025-07-07 11:37:24,116][04410] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 7933952. Throughput: 0: 996.0. Samples: 1983774. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:37:24,121][04410] Avg episode reward: [(0, '27.364')]
[2025-07-07 11:37:27,013][04865] Updated weights for policy 0, policy_version 1940 (0.0018)
[2025-07-07 11:37:29,116][04410] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7954432. Throughput: 0: 988.3. Samples: 1986190. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-07-07 11:37:29,121][04410] Avg episode reward: [(0, '27.773')]
[2025-07-07 11:37:34,116][04410] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 7974912. Throughput: 0: 1012.4. Samples: 1992996. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-07-07 11:37:34,122][04410] Avg episode reward: [(0, '29.544')]
[2025-07-07 11:37:36,518][04865] Updated weights for policy 0, policy_version 1950 (0.0016)
[2025-07-07 11:37:39,120][04410] Fps is (10 sec: 3685.1, 60 sec: 3891.0, 300 sec: 3984.9). Total num frames: 7991296. Throughput: 0: 983.1. Samples: 1998448. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-07-07 11:37:39,126][04410] Avg episode reward: [(0, '28.139')]
[2025-07-07 11:37:42,143][04851] Stopping Batcher_0...
[2025-07-07 11:37:42,144][04851] Loop batcher_evt_loop terminating...
[2025-07-07 11:37:42,145][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001955_8007680.pth...
[2025-07-07 11:37:42,145][04410] Component Batcher_0 stopped!
[2025-07-07 11:37:42,146][04410] Component RolloutWorker_w1 process died already! Don't wait for it.
[2025-07-07 11:37:42,147][04410] Component RolloutWorker_w4 process died already! Don't wait for it.
[2025-07-07 11:37:42,148][04410] Component RolloutWorker_w6 process died already! Don't wait for it.
[2025-07-07 11:37:42,196][04865] Weights refcount: 2 0
[2025-07-07 11:37:42,206][04410] Component InferenceWorker_p0-w0 stopped!
[2025-07-07 11:37:42,207][04865] Stopping InferenceWorker_p0-w0...
[2025-07-07 11:37:42,207][04865] Loop inference_proc0-0_evt_loop terminating...
[2025-07-07 11:37:42,264][04851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001814_7430144.pth
[2025-07-07 11:37:42,276][04851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001955_8007680.pth...
[2025-07-07 11:37:42,412][04410] Component LearnerWorker_p0 stopped!
[2025-07-07 11:37:42,413][04851] Stopping LearnerWorker_p0...
[2025-07-07 11:37:42,414][04851] Loop learner_proc0_evt_loop terminating...
[2025-07-07 11:37:42,529][04410] Component RolloutWorker_w7 stopped!
[2025-07-07 11:37:42,537][04866] Stopping RolloutWorker_w3...
[2025-07-07 11:37:42,537][04410] Component RolloutWorker_w3 stopped!
[2025-07-07 11:37:42,529][04871] Stopping RolloutWorker_w7...
[2025-07-07 11:37:42,548][04866] Loop rollout_proc3_evt_loop terminating...
[2025-07-07 11:37:42,547][04871] Loop rollout_proc7_evt_loop terminating...
[2025-07-07 11:37:42,572][04410] Component RolloutWorker_w0 stopped!
[2025-07-07 11:37:42,573][04872] Stopping RolloutWorker_w0...
[2025-07-07 11:37:42,579][04872] Loop rollout_proc0_evt_loop terminating...
[2025-07-07 11:37:42,588][04868] Stopping RolloutWorker_w5...
[2025-07-07 11:37:42,588][04410] Component RolloutWorker_w5 stopped!
[2025-07-07 11:37:42,598][04410] Component RolloutWorker_w2 stopped!
[2025-07-07 11:37:42,599][04410] Waiting for process learner_proc0 to stop...
[2025-07-07 11:37:42,597][04864] Stopping RolloutWorker_w2...
[2025-07-07 11:37:42,601][04864] Loop rollout_proc2_evt_loop terminating...
[2025-07-07 11:37:42,598][04868] Loop rollout_proc5_evt_loop terminating...
[2025-07-07 11:37:43,844][04410] Waiting for process inference_proc0-0 to join...
[2025-07-07 11:37:43,845][04410] Waiting for process rollout_proc0 to join...
[2025-07-07 11:37:44,522][04410] Waiting for process rollout_proc1 to join...
[2025-07-07 11:37:44,523][04410] Waiting for process rollout_proc2 to join...
[2025-07-07 11:37:44,524][04410] Waiting for process rollout_proc3 to join...
[2025-07-07 11:37:45,193][04410] Waiting for process rollout_proc4 to join...
[2025-07-07 11:37:45,194][04410] Waiting for process rollout_proc5 to join...
[2025-07-07 11:37:45,195][04410] Waiting for process rollout_proc6 to join...
[2025-07-07 11:37:45,196][04410] Waiting for process rollout_proc7 to join...
[2025-07-07 11:37:45,197][04410] Batcher 0 profile tree view:
batching: 44.8028, releasing_batches: 0.0456
[2025-07-07 11:37:45,198][04410] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0037
wait_policy_total: 787.5883
update_model: 17.6278
weight_update: 0.0022
one_step: 0.0029
handle_policy_step: 1161.0064
deserialize: 27.6890, stack: 6.9494, obs_to_device_normalize: 260.2275, forward: 609.0297, send_messages: 43.2567
prepare_outputs: 164.3805
to_cpu: 103.5949
[2025-07-07 11:37:45,199][04410] Learner 0 profile tree view:
misc: 0.0072, prepare_batch: 22.4008
train: 132.3602
epoch_init: 0.0169, minibatch_init: 0.0154, losses_postprocess: 1.2013, kl_divergence: 1.1626, after_optimizer: 63.7433
calculate_losses: 45.0292
losses_init: 0.0073, forward_head: 2.2134, bptt_initial: 30.6802, tail: 1.7938, advantages_returns: 0.4689, losses: 5.9394
bptt: 3.5081
bptt_forward_core: 3.3632
update: 20.2696
clip: 1.7873
[2025-07-07 11:37:45,200][04410] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.6073, enqueue_policy_requests: 340.6194, env_step: 1493.4473, overhead: 27.4375, complete_rollouts: 11.2249
save_policy_outputs: 38.8964
split_output_tensors: 14.9490
[2025-07-07 11:37:45,200][04410] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.6419, enqueue_policy_requests: 205.4864, env_step: 1589.4323, overhead: 30.9807, complete_rollouts: 15.1595
save_policy_outputs: 46.2018
split_output_tensors: 17.8548
[2025-07-07 11:37:45,203][04410] Loop Runner_EvtLoop terminating...
[2025-07-07 11:37:45,205][04410] Runner profile tree view:
main_loop: 2060.5797
[2025-07-07 11:37:45,206][04410] Collected {0: 8007680}, FPS: 3886.1
[2025-07-07 11:37:56,799][04410] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-07-07 11:37:56,799][04410] Overriding arg 'num_workers' with value 1 passed from command line
[2025-07-07 11:37:56,802][04410] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-07-07 11:37:56,802][04410] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-07-07 11:37:56,803][04410] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-07-07 11:37:56,804][04410] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-07-07 11:37:56,805][04410] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2025-07-07 11:37:56,806][04410] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-07-07 11:37:56,807][04410] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2025-07-07 11:37:56,808][04410] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2025-07-07 11:37:56,809][04410] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-07-07 11:37:56,809][04410] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-07-07 11:37:56,810][04410] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-07-07 11:37:56,811][04410] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-07-07 11:37:56,812][04410] Using frameskip 1 and render_action_repeat=4 for evaluation
[2025-07-07 11:37:56,839][04410] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-07-07 11:37:56,842][04410] RunningMeanStd input shape: (3, 72, 128)
[2025-07-07 11:37:56,844][04410] RunningMeanStd input shape: (1,)
[2025-07-07 11:37:56,858][04410] ConvEncoder: input_channels=3
[2025-07-07 11:37:56,958][04410] Conv encoder output size: 512
[2025-07-07 11:37:56,958][04410] Policy head output size: 512
[2025-07-07 11:37:57,211][04410] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001955_8007680.pth...
[2025-07-07 11:37:57,989][04410] Num frames 100...
[2025-07-07 11:37:58,115][04410] Num frames 200...
[2025-07-07 11:37:58,241][04410] Num frames 300...
[2025-07-07 11:37:58,374][04410] Num frames 400...
[2025-07-07 11:37:58,501][04410] Num frames 500...
[2025-07-07 11:37:58,627][04410] Num frames 600...
[2025-07-07 11:37:58,752][04410] Num frames 700...
[2025-07-07 11:37:58,813][04410] Avg episode rewards: #0: 15.040, true rewards: #0: 7.040
[2025-07-07 11:37:58,814][04410] Avg episode reward: 15.040, avg true_objective: 7.040
[2025-07-07 11:37:58,941][04410] Num frames 800...
[2025-07-07 11:37:59,074][04410] Num frames 900...
[2025-07-07 11:37:59,199][04410] Num frames 1000...
[2025-07-07 11:37:59,327][04410] Num frames 1100...
[2025-07-07 11:37:59,454][04410] Num frames 1200...
[2025-07-07 11:37:59,579][04410] Num frames 1300...
[2025-07-07 11:37:59,703][04410] Num frames 1400...
[2025-07-07 11:37:59,830][04410] Num frames 1500...
[2025-07-07 11:38:00,000][04410] Avg episode rewards: #0: 17.905, true rewards: #0: 7.905
[2025-07-07 11:38:00,001][04410] Avg episode reward: 17.905, avg true_objective: 7.905
[2025-07-07 11:38:00,026][04410] Num frames 1600...
[2025-07-07 11:38:00,151][04410] Num frames 1700...
[2025-07-07 11:38:00,285][04410] Num frames 1800...
[2025-07-07 11:38:00,417][04410] Num frames 1900...
[2025-07-07 11:38:00,548][04410] Num frames 2000...
[2025-07-07 11:38:00,679][04410] Num frames 2100...
[2025-07-07 11:38:00,819][04410] Num frames 2200...
[2025-07-07 11:38:01,003][04410] Num frames 2300...
[2025-07-07 11:38:01,179][04410] Num frames 2400...
[2025-07-07 11:38:01,351][04410] Num frames 2500...
[2025-07-07 11:38:01,528][04410] Num frames 2600...
[2025-07-07 11:38:01,722][04410] Num frames 2700...
[2025-07-07 11:38:01,893][04410] Num frames 2800...
[2025-07-07 11:38:02,076][04410] Num frames 2900...
[2025-07-07 11:38:02,246][04410] Num frames 3000...
[2025-07-07 11:38:02,425][04410] Num frames 3100...
[2025-07-07 11:38:02,608][04410] Num frames 3200...
[2025-07-07 11:38:02,790][04410] Num frames 3300...
[2025-07-07 11:38:02,976][04410] Num frames 3400...
[2025-07-07 11:38:03,196][04410] Num frames 3500...
[2025-07-07 11:38:03,345][04410] Num frames 3600...
[2025-07-07 11:38:03,510][04410] Avg episode rewards: #0: 28.603, true rewards: #0: 12.270
[2025-07-07 11:38:03,511][04410] Avg episode reward: 28.603, avg true_objective: 12.270
[2025-07-07 11:38:03,537][04410] Num frames 3700...
[2025-07-07 11:38:03,666][04410] Num frames 3800...
[2025-07-07 11:38:03,800][04410] Num frames 3900...
[2025-07-07 11:38:03,932][04410] Num frames 4000...
[2025-07-07 11:38:04,070][04410] Num frames 4100...
[2025-07-07 11:38:04,203][04410] Num frames 4200...
[2025-07-07 11:38:04,337][04410] Num frames 4300...
[2025-07-07 11:38:04,471][04410] Num frames 4400...
[2025-07-07 11:38:04,609][04410] Num frames 4500...
[2025-07-07 11:38:04,740][04410] Num frames 4600...
[2025-07-07 11:38:04,851][04410] Avg episode rewards: #0: 26.852, true rewards: #0: 11.603
[2025-07-07 11:38:04,852][04410] Avg episode reward: 26.852, avg true_objective: 11.603
[2025-07-07 11:38:04,929][04410] Num frames 4700...
[2025-07-07 11:38:05,063][04410] Num frames 4800...
[2025-07-07 11:38:05,203][04410] Num frames 4900...
[2025-07-07 11:38:05,334][04410] Num frames 5000...
[2025-07-07 11:38:05,465][04410] Num frames 5100...
[2025-07-07 11:38:05,595][04410] Num frames 5200...
[2025-07-07 11:38:05,726][04410] Num frames 5300...
[2025-07-07 11:38:05,857][04410] Num frames 5400...
[2025-07-07 11:38:05,992][04410] Num frames 5500...
[2025-07-07 11:38:06,124][04410] Num frames 5600...
[2025-07-07 11:38:06,186][04410] Avg episode rewards: #0: 25.202, true rewards: #0: 11.202
[2025-07-07 11:38:06,187][04410] Avg episode reward: 25.202, avg true_objective: 11.202
[2025-07-07 11:38:06,315][04410] Num frames 5700...
[2025-07-07 11:38:06,447][04410] Num frames 5800...
[2025-07-07 11:38:06,577][04410] Num frames 5900...
[2025-07-07 11:38:06,706][04410] Num frames 6000...
[2025-07-07 11:38:06,846][04410] Num frames 6100...
[2025-07-07 11:38:06,974][04410] Num frames 6200...
[2025-07-07 11:38:07,103][04410] Num frames 6300...
[2025-07-07 11:38:07,239][04410] Num frames 6400...
[2025-07-07 11:38:07,373][04410] Num frames 6500...
[2025-07-07 11:38:07,506][04410] Num frames 6600...
[2025-07-07 11:38:07,636][04410] Num frames 6700...
[2025-07-07 11:38:07,768][04410] Num frames 6800...
[2025-07-07 11:38:07,945][04410] Avg episode rewards: #0: 26.157, true rewards: #0: 11.490
[2025-07-07 11:38:07,946][04410] Avg episode reward: 26.157, avg true_objective: 11.490
[2025-07-07 11:38:07,956][04410] Num frames 6900...
[2025-07-07 11:38:08,086][04410] Num frames 7000...
[2025-07-07 11:38:08,227][04410] Num frames 7100...
[2025-07-07 11:38:08,358][04410] Num frames 7200...
[2025-07-07 11:38:08,489][04410] Num frames 7300...
[2025-07-07 11:38:08,618][04410] Num frames 7400...
[2025-07-07 11:38:08,752][04410] Num frames 7500...
[2025-07-07 11:38:08,888][04410] Avg episode rewards: #0: 24.803, true rewards: #0: 10.803
[2025-07-07 11:38:08,889][04410] Avg episode reward: 24.803, avg true_objective: 10.803
[2025-07-07 11:38:08,944][04410] Num frames 7600...
[2025-07-07 11:38:09,074][04410] Num frames 7700...
[2025-07-07 11:38:09,202][04410] Num frames 7800...
[2025-07-07 11:38:09,345][04410] Num frames 7900...
[2025-07-07 11:38:09,475][04410] Num frames 8000...
[2025-07-07 11:38:09,605][04410] Num frames 8100...
[2025-07-07 11:38:09,736][04410] Num frames 8200...
[2025-07-07 11:38:09,869][04410] Num frames 8300...
[2025-07-07 11:38:10,008][04410] Avg episode rewards: #0: 24.203, true rewards: #0: 10.452
[2025-07-07 11:38:10,009][04410] Avg episode reward: 24.203, avg true_objective: 10.452
[2025-07-07 11:38:10,063][04410] Num frames 8400...
[2025-07-07 11:38:10,194][04410] Num frames 8500...
[2025-07-07 11:38:10,339][04410] Num frames 8600...
[2025-07-07 11:38:10,482][04410] Num frames 8700...
[2025-07-07 11:38:10,557][04410] Avg episode rewards: #0: 21.904, true rewards: #0: 9.682
[2025-07-07 11:38:10,558][04410] Avg episode reward: 21.904, avg true_objective: 9.682
[2025-07-07 11:38:10,668][04410] Num frames 8800...
[2025-07-07 11:38:10,796][04410] Num frames 8900...
[2025-07-07 11:38:10,923][04410] Num frames 9000...
[2025-07-07 11:38:11,053][04410] Num frames 9100...
[2025-07-07 11:38:11,181][04410] Num frames 9200...
[2025-07-07 11:38:11,324][04410] Num frames 9300...
[2025-07-07 11:38:11,456][04410] Num frames 9400...
[2025-07-07 11:38:11,586][04410] Num frames 9500...
[2025-07-07 11:38:11,715][04410] Num frames 9600...
[2025-07-07 11:38:11,844][04410] Num frames 9700...
[2025-07-07 11:38:11,982][04410] Num frames 9800...
[2025-07-07 11:38:12,110][04410] Num frames 9900...
[2025-07-07 11:38:12,237][04410] Num frames 10000...
[2025-07-07 11:38:12,379][04410] Num frames 10100...
[2025-07-07 11:38:12,463][04410] Avg episode rewards: #0: 23.522, true rewards: #0: 10.122
[2025-07-07 11:38:12,464][04410] Avg episode reward: 23.522, avg true_objective: 10.122
[2025-07-07 11:39:10,180][04410] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2025-07-07 11:40:14,366][04410] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-07-07 11:40:14,368][04410] Overriding arg 'num_workers' with value 1 passed from command line
[2025-07-07 11:40:14,369][04410] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-07-07 11:40:14,370][04410] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-07-07 11:40:14,373][04410] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-07-07 11:40:14,374][04410] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-07-07 11:40:14,376][04410] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2025-07-07 11:40:14,377][04410] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-07-07 11:40:14,378][04410] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2025-07-07 11:40:14,378][04410] Adding new argument 'hf_repository'='zhngq/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2025-07-07 11:40:14,379][04410] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-07-07 11:40:14,380][04410] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-07-07 11:40:14,381][04410] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-07-07 11:40:14,381][04410] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-07-07 11:40:14,385][04410] Using frameskip 1 and render_action_repeat=4 for evaluation
[2025-07-07 11:40:14,439][04410] RunningMeanStd input shape: (3, 72, 128)
[2025-07-07 11:40:14,440][04410] RunningMeanStd input shape: (1,)
[2025-07-07 11:40:14,463][04410] ConvEncoder: input_channels=3
[2025-07-07 11:40:14,515][04410] Conv encoder output size: 512
[2025-07-07 11:40:14,516][04410] Policy head output size: 512
[2025-07-07 11:40:14,540][04410] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001955_8007680.pth...
[2025-07-07 11:40:15,163][04410] Num frames 100...
[2025-07-07 11:40:15,341][04410] Num frames 200...
[2025-07-07 11:40:15,520][04410] Num frames 300...
[2025-07-07 11:40:15,695][04410] Num frames 400...
[2025-07-07 11:40:15,849][04410] Num frames 500...
[2025-07-07 11:40:15,975][04410] Num frames 600...
[2025-07-07 11:40:16,120][04410] Avg episode rewards: #0: 11.720, true rewards: #0: 6.720
[2025-07-07 11:40:16,121][04410] Avg episode reward: 11.720, avg true_objective: 6.720
[2025-07-07 11:40:16,160][04410] Num frames 700...
[2025-07-07 11:40:16,287][04410] Num frames 800...
[2025-07-07 11:40:16,410][04410] Num frames 900...
[2025-07-07 11:40:16,548][04410] Num frames 1000...
[2025-07-07 11:40:16,679][04410] Num frames 1100...
[2025-07-07 11:40:16,806][04410] Num frames 1200...
[2025-07-07 11:40:16,933][04410] Num frames 1300...
[2025-07-07 11:40:17,070][04410] Num frames 1400...
[2025-07-07 11:40:17,200][04410] Num frames 1500...
[2025-07-07 11:40:17,332][04410] Num frames 1600...
[2025-07-07 11:40:17,458][04410] Num frames 1700...
[2025-07-07 11:40:17,596][04410] Num frames 1800...
[2025-07-07 11:40:17,723][04410] Num frames 1900...
[2025-07-07 11:40:17,848][04410] Num frames 2000...
[2025-07-07 11:40:17,979][04410] Num frames 2100...
[2025-07-07 11:40:18,105][04410] Num frames 2200...
[2025-07-07 11:40:18,235][04410] Num frames 2300...
[2025-07-07 11:40:18,380][04410] Avg episode rewards: #0: 29.840, true rewards: #0: 11.840
[2025-07-07 11:40:18,381][04410] Avg episode reward: 29.840, avg true_objective: 11.840
[2025-07-07 11:40:18,423][04410] Num frames 2400...
[2025-07-07 11:40:18,552][04410] Num frames 2500...
[2025-07-07 11:40:18,689][04410] Num frames 2600...
[2025-07-07 11:40:18,817][04410] Num frames 2700...
[2025-07-07 11:40:18,946][04410] Num frames 2800...
[2025-07-07 11:40:19,075][04410] Num frames 2900...
[2025-07-07 11:40:19,203][04410] Num frames 3000...
[2025-07-07 11:40:19,331][04410] Num frames 3100...
[2025-07-07 11:40:19,456][04410] Num frames 3200...
[2025-07-07 11:40:19,581][04410] Num frames 3300...
[2025-07-07 11:40:19,718][04410] Num frames 3400...
[2025-07-07 11:40:19,883][04410] Avg episode rewards: #0: 26.960, true rewards: #0: 11.627
[2025-07-07 11:40:19,884][04410] Avg episode reward: 26.960, avg true_objective: 11.627
[2025-07-07 11:40:19,901][04410] Num frames 3500...
[2025-07-07 11:40:20,032][04410] Num frames 3600...
[2025-07-07 11:40:20,164][04410] Num frames 3700...
[2025-07-07 11:40:20,295][04410] Num frames 3800...
[2025-07-07 11:40:20,428][04410] Num frames 3900...
[2025-07-07 11:40:20,555][04410] Num frames 4000...
[2025-07-07 11:40:20,693][04410] Num frames 4100...
[2025-07-07 11:40:20,820][04410] Num frames 4200...
[2025-07-07 11:40:20,950][04410] Num frames 4300...
[2025-07-07 11:40:21,088][04410] Num frames 4400...
[2025-07-07 11:40:21,218][04410] Num frames 4500...
[2025-07-07 11:40:21,347][04410] Num frames 4600...
[2025-07-07 11:40:21,413][04410] Avg episode rewards: #0: 26.520, true rewards: #0: 11.520
[2025-07-07 11:40:21,414][04410] Avg episode reward: 26.520, avg true_objective: 11.520
[2025-07-07 11:40:21,531][04410] Num frames 4700...
[2025-07-07 11:40:21,658][04410] Num frames 4800...
[2025-07-07 11:40:21,796][04410] Num frames 4900...
[2025-07-07 11:40:21,924][04410] Num frames 5000...
[2025-07-07 11:40:22,052][04410] Num frames 5100...
[2025-07-07 11:40:22,179][04410] Num frames 5200...
[2025-07-07 11:40:22,309][04410] Num frames 5300...
[2025-07-07 11:40:22,436][04410] Num frames 5400...
[2025-07-07 11:40:22,567][04410] Num frames 5500...
[2025-07-07 11:40:22,696][04410] Num frames 5600...
[2025-07-07 11:40:22,833][04410] Num frames 5700...
[2025-07-07 11:40:22,960][04410] Num frames 5800...
[2025-07-07 11:40:23,094][04410] Num frames 5900...
[2025-07-07 11:40:23,175][04410] Avg episode rewards: #0: 28.240, true rewards: #0: 11.840
[2025-07-07 11:40:23,176][04410] Avg episode reward: 28.240, avg true_objective: 11.840
[2025-07-07 11:40:23,281][04410] Num frames 6000...
[2025-07-07 11:40:23,407][04410] Num frames 6100...
[2025-07-07 11:40:23,533][04410] Num frames 6200...
[2025-07-07 11:40:23,658][04410] Num frames 6300...
[2025-07-07 11:40:23,792][04410] Num frames 6400...
[2025-07-07 11:40:23,919][04410] Num frames 6500...
[2025-07-07 11:40:24,046][04410] Num frames 6600...
[2025-07-07 11:40:24,175][04410] Num frames 6700...
[2025-07-07 11:40:24,300][04410] Num frames 6800...
[2025-07-07 11:40:24,428][04410] Num frames 6900...
[2025-07-07 11:40:24,500][04410] Avg episode rewards: #0: 27.188, true rewards: #0: 11.522
[2025-07-07 11:40:24,501][04410] Avg episode reward: 27.188, avg true_objective: 11.522
[2025-07-07 11:40:24,610][04410] Num frames 7000...
[2025-07-07 11:40:24,735][04410] Num frames 7100...
[2025-07-07 11:40:24,872][04410] Num frames 7200...
[2025-07-07 11:40:25,001][04410] Num frames 7300...
[2025-07-07 11:40:25,125][04410] Num frames 7400...
[2025-07-07 11:40:25,256][04410] Num frames 7500...
[2025-07-07 11:40:25,382][04410] Num frames 7600...
[2025-07-07 11:40:25,509][04410] Num frames 7700...
[2025-07-07 11:40:25,638][04410] Num frames 7800...
[2025-07-07 11:40:25,770][04410] Num frames 7900...
[2025-07-07 11:40:25,959][04410] Num frames 8000...
[2025-07-07 11:40:26,139][04410] Num frames 8100...
[2025-07-07 11:40:26,313][04410] Num frames 8200...
[2025-07-07 11:40:26,478][04410] Num frames 8300...
[2025-07-07 11:40:26,645][04410] Num frames 8400...
[2025-07-07 11:40:26,812][04410] Num frames 8500...
[2025-07-07 11:40:26,984][04410] Num frames 8600...
[2025-07-07 11:40:27,159][04410] Num frames 8700...
[2025-07-07 11:40:27,343][04410] Num frames 8800...
[2025-07-07 11:40:27,517][04410] Num frames 8900...
[2025-07-07 11:40:27,699][04410] Num frames 9000...
[2025-07-07 11:40:27,783][04410] Avg episode rewards: #0: 31.590, true rewards: #0: 12.876
[2025-07-07 11:40:27,785][04410] Avg episode reward: 31.590, avg true_objective: 12.876
[2025-07-07 11:40:27,931][04410] Num frames 9100...
[2025-07-07 11:40:28,067][04410] Num frames 9200...
[2025-07-07 11:40:28,193][04410] Num frames 9300...
[2025-07-07 11:40:28,324][04410] Num frames 9400...
[2025-07-07 11:40:28,453][04410] Num frames 9500...
[2025-07-07 11:40:28,582][04410] Num frames 9600...
[2025-07-07 11:40:28,709][04410] Num frames 9700...
[2025-07-07 11:40:28,838][04410] Num frames 9800...
[2025-07-07 11:40:28,974][04410] Num frames 9900...
[2025-07-07 11:40:29,103][04410] Num frames 10000...
[2025-07-07 11:40:29,234][04410] Num frames 10100...
[2025-07-07 11:40:29,366][04410] Num frames 10200...
[2025-07-07 11:40:29,494][04410] Num frames 10300...
[2025-07-07 11:40:29,619][04410] Num frames 10400...
[2025-07-07 11:40:29,746][04410] Num frames 10500...
[2025-07-07 11:40:29,902][04410] Avg episode rewards: #0: 32.476, true rewards: #0: 13.226
[2025-07-07 11:40:29,903][04410] Avg episode reward: 32.476, avg true_objective: 13.226
[2025-07-07 11:40:29,928][04410] Num frames 10600...
[2025-07-07 11:40:30,067][04410] Num frames 10700...
[2025-07-07 11:40:30,192][04410] Num frames 10800...
[2025-07-07 11:40:30,324][04410] Num frames 10900...
[2025-07-07 11:40:30,450][04410] Num frames 11000...
[2025-07-07 11:40:30,584][04410] Avg episode rewards: #0: 29.845, true rewards: #0: 12.290
[2025-07-07 11:40:30,584][04410] Avg episode reward: 29.845, avg true_objective: 12.290
[2025-07-07 11:40:30,635][04410] Num frames 11100...
[2025-07-07 11:40:30,762][04410] Num frames 11200...
[2025-07-07 11:40:30,890][04410] Num frames 11300...
[2025-07-07 11:40:31,017][04410] Num frames 11400...
[2025-07-07 11:40:31,154][04410] Num frames 11500...
[2025-07-07 11:40:31,283][04410] Num frames 11600...
[2025-07-07 11:40:31,410][04410] Num frames 11700...
[2025-07-07 11:40:31,537][04410] Num frames 11800...
[2025-07-07 11:40:31,666][04410] Num frames 11900...
[2025-07-07 11:40:31,794][04410] Num frames 12000...
[2025-07-07 11:40:31,921][04410] Num frames 12100...
[2025-07-07 11:40:32,047][04410] Num frames 12200...
[2025-07-07 11:40:32,185][04410] Num frames 12300...
[2025-07-07 11:40:32,314][04410] Num frames 12400...
[2025-07-07 11:40:32,443][04410] Num frames 12500...
[2025-07-07 11:40:32,570][04410] Num frames 12600...
[2025-07-07 11:40:32,701][04410] Num frames 12700...
[2025-07-07 11:40:32,828][04410] Avg episode rewards: #0: 31.754, true rewards: #0: 12.754
[2025-07-07 11:40:32,828][04410] Avg episode reward: 31.754, avg true_objective: 12.754
[2025-07-07 11:41:42,522][04410] Replay video saved to /content/train_dir/default_experiment/replay.mp4!