[2025-02-27 22:35:21,752][00667] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-02-27 22:35:21,755][00667] Rollout worker 0 uses device cpu
[2025-02-27 22:35:21,756][00667] Rollout worker 1 uses device cpu
[2025-02-27 22:35:21,757][00667] Rollout worker 2 uses device cpu
[2025-02-27 22:35:21,758][00667] Rollout worker 3 uses device cpu
[2025-02-27 22:35:21,759][00667] Rollout worker 4 uses device cpu
[2025-02-27 22:35:21,759][00667] Rollout worker 5 uses device cpu
[2025-02-27 22:35:21,760][00667] Rollout worker 6 uses device cpu
[2025-02-27 22:35:21,761][00667] Rollout worker 7 uses device cpu
[2025-02-27 22:35:21,934][00667] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-27 22:35:21,935][00667] InferenceWorker_p0-w0: min num requests: 2
[2025-02-27 22:35:21,973][00667] Starting all processes...
[2025-02-27 22:35:21,974][00667] Starting process learner_proc0
[2025-02-27 22:35:22,129][00667] Starting all processes...
[2025-02-27 22:35:22,136][00667] Starting process inference_proc0-0
[2025-02-27 22:35:22,137][00667] Starting process rollout_proc0
[2025-02-27 22:35:22,139][00667] Starting process rollout_proc1
[2025-02-27 22:35:22,139][00667] Starting process rollout_proc2
[2025-02-27 22:35:22,139][00667] Starting process rollout_proc3
[2025-02-27 22:35:22,139][00667] Starting process rollout_proc4
[2025-02-27 22:35:22,139][00667] Starting process rollout_proc5
[2025-02-27 22:35:22,139][00667] Starting process rollout_proc6
[2025-02-27 22:35:22,139][00667] Starting process rollout_proc7
[2025-02-27 22:35:38,135][02863] Worker 6 uses CPU cores [0]
[2025-02-27 22:35:38,284][02843] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-27 22:35:38,285][02843] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-02-27 22:35:38,303][02862] Worker 7 uses CPU cores [1]
[2025-02-27 22:35:38,361][02843] Num visible devices: 1
[2025-02-27 22:35:38,394][02843] Starting seed is not provided
[2025-02-27 22:35:38,395][02843] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-27 22:35:38,396][02843] Initializing actor-critic model on device cuda:0
[2025-02-27 22:35:38,397][02843] RunningMeanStd input shape: (3, 72, 128)
[2025-02-27 22:35:38,403][02843] RunningMeanStd input shape: (1,)
[2025-02-27 22:35:38,433][02860] Worker 5 uses CPU cores [1]
[2025-02-27 22:35:38,495][02843] ConvEncoder: input_channels=3
[2025-02-27 22:35:38,562][02861] Worker 4 uses CPU cores [0]
[2025-02-27 22:35:38,725][02859] Worker 2 uses CPU cores [0]
[2025-02-27 22:35:38,811][02857] Worker 0 uses CPU cores [0]
[2025-02-27 22:35:38,812][02864] Worker 3 uses CPU cores [1]
[2025-02-27 22:35:38,828][02856] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-27 22:35:38,829][02856] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-02-27 22:35:38,848][02858] Worker 1 uses CPU cores [1]
[2025-02-27 22:35:38,903][02856] Num visible devices: 1
[2025-02-27 22:35:39,097][02843] Conv encoder output size: 512
[2025-02-27 22:35:39,098][02843] Policy head output size: 512
[2025-02-27 22:35:39,187][02843] Created Actor Critic model with architecture:
[2025-02-27 22:35:39,188][02843] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2025-02-27 22:35:39,624][02843] Using optimizer <class 'torch.optim.adam.Adam'>
[2025-02-27 22:35:41,935][00667] Heartbeat connected on InferenceWorker_p0-w0
[2025-02-27 22:35:41,943][00667] Heartbeat connected on Batcher_0
[2025-02-27 22:35:41,944][00667] Heartbeat connected on RolloutWorker_w0
[2025-02-27 22:35:41,952][00667] Heartbeat connected on RolloutWorker_w1
[2025-02-27 22:35:41,959][00667] Heartbeat connected on RolloutWorker_w3
[2025-02-27 22:35:41,960][00667] Heartbeat connected on RolloutWorker_w2
[2025-02-27 22:35:41,964][00667] Heartbeat connected on RolloutWorker_w4
[2025-02-27 22:35:41,967][00667] Heartbeat connected on RolloutWorker_w5
[2025-02-27 22:35:41,971][00667] Heartbeat connected on RolloutWorker_w6
[2025-02-27 22:35:41,973][00667] Heartbeat connected on RolloutWorker_w7
[2025-02-27 22:35:44,978][02843] No checkpoints found
[2025-02-27 22:35:44,978][02843] Did not load from checkpoint, starting from scratch!
[2025-02-27 22:35:44,978][02843] Initialized policy 0 weights for model version 0
[2025-02-27 22:35:44,981][02843] LearnerWorker_p0 finished initialization!
[2025-02-27 22:35:44,981][00667] Heartbeat connected on LearnerWorker_p0
[2025-02-27 22:35:44,982][02843] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-27 22:35:45,137][02856] RunningMeanStd input shape: (3, 72, 128)
[2025-02-27 22:35:45,139][02856] RunningMeanStd input shape: (1,)
[2025-02-27 22:35:45,151][02856] ConvEncoder: input_channels=3
[2025-02-27 22:35:45,252][02856] Conv encoder output size: 512
[2025-02-27 22:35:45,252][02856] Policy head output size: 512
[2025-02-27 22:35:45,289][00667] Inference worker 0-0 is ready!
[2025-02-27 22:35:45,290][00667] All inference workers are ready! Signal rollout workers to start!
[2025-02-27 22:35:45,492][00667] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-27 22:35:45,574][02863] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:35:45,572][02859] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:35:45,594][02857] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:35:45,639][02862] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:35:45,648][02864] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:35:45,662][02861] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:35:45,722][02860] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:35:45,720][02858] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:35:47,267][02859] Decorrelating experience for 0 frames...
[2025-02-27 22:35:47,270][02863] Decorrelating experience for 0 frames...
[2025-02-27 22:35:47,269][02857] Decorrelating experience for 0 frames...
[2025-02-27 22:35:47,287][02862] Decorrelating experience for 0 frames...
[2025-02-27 22:35:47,292][02864] Decorrelating experience for 0 frames...
[2025-02-27 22:35:47,343][02860] Decorrelating experience for 0 frames...
[2025-02-27 22:35:47,341][02858] Decorrelating experience for 0 frames...
[2025-02-27 22:35:48,066][02862] Decorrelating experience for 32 frames...
[2025-02-27 22:35:48,129][02864] Decorrelating experience for 32 frames...
[2025-02-27 22:35:48,379][02863] Decorrelating experience for 32 frames...
[2025-02-27 22:35:48,718][02861] Decorrelating experience for 0 frames...
[2025-02-27 22:35:48,783][02859] Decorrelating experience for 32 frames...
[2025-02-27 22:35:49,324][02864] Decorrelating experience for 64 frames...
[2025-02-27 22:35:49,460][02858] Decorrelating experience for 32 frames...
[2025-02-27 22:35:49,747][02860] Decorrelating experience for 32 frames...
[2025-02-27 22:35:49,824][02857] Decorrelating experience for 32 frames...
[2025-02-27 22:35:50,294][02862] Decorrelating experience for 64 frames...
[2025-02-27 22:35:50,319][02861] Decorrelating experience for 32 frames...
[2025-02-27 22:35:50,467][02863] Decorrelating experience for 64 frames...
[2025-02-27 22:35:50,491][00667] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-27 22:35:50,861][02859] Decorrelating experience for 64 frames...
[2025-02-27 22:35:51,027][02858] Decorrelating experience for 64 frames...
[2025-02-27 22:35:51,311][02860] Decorrelating experience for 64 frames...
[2025-02-27 22:35:51,883][02857] Decorrelating experience for 64 frames...
[2025-02-27 22:35:51,908][02864] Decorrelating experience for 96 frames...
[2025-02-27 22:35:52,332][02863] Decorrelating experience for 96 frames...
[2025-02-27 22:35:52,385][02862] Decorrelating experience for 96 frames...
[2025-02-27 22:35:52,684][02861] Decorrelating experience for 64 frames...
[2025-02-27 22:35:53,146][02859] Decorrelating experience for 96 frames...
[2025-02-27 22:35:54,732][02857] Decorrelating experience for 96 frames...
[2025-02-27 22:35:55,491][00667] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 13.8. Samples: 138. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-27 22:35:55,492][00667] Avg episode reward: [(0, '1.040')]
[2025-02-27 22:35:55,945][02861] Decorrelating experience for 96 frames...
[2025-02-27 22:35:56,520][02860] Decorrelating experience for 96 frames...
[2025-02-27 22:35:56,797][02858] Decorrelating experience for 96 frames...
[2025-02-27 22:35:59,552][02843] Signal inference workers to stop experience collection...
[2025-02-27 22:35:59,570][02856] InferenceWorker_p0-w0: stopping experience collection
[2025-02-27 22:36:00,491][00667] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 174.8. Samples: 2622. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-27 22:36:00,492][00667] Avg episode reward: [(0, '2.470')]
[2025-02-27 22:36:00,840][02843] Signal inference workers to resume experience collection...
[2025-02-27 22:36:00,841][02856] InferenceWorker_p0-w0: resuming experience collection
[2025-02-27 22:36:05,491][00667] Fps is (10 sec: 2867.2, 60 sec: 1433.6, 300 sec: 1433.6). Total num frames: 28672. Throughput: 0: 262.1. Samples: 5242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:36:05,495][00667] Avg episode reward: [(0, '3.709')]
[2025-02-27 22:36:08,188][02856] Updated weights for policy 0, policy_version 10 (0.0040)
[2025-02-27 22:36:10,491][00667] Fps is (10 sec: 4505.6, 60 sec: 1802.3, 300 sec: 1802.3). Total num frames: 45056. Throughput: 0: 471.5. Samples: 11788. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:36:10,493][00667] Avg episode reward: [(0, '4.309')]
[2025-02-27 22:36:15,491][00667] Fps is (10 sec: 3686.4, 60 sec: 2184.6, 300 sec: 2184.6). Total num frames: 65536. Throughput: 0: 564.0. Samples: 16920. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:36:15,494][00667] Avg episode reward: [(0, '4.533')]
[2025-02-27 22:36:18,759][02856] Updated weights for policy 0, policy_version 20 (0.0021)
[2025-02-27 22:36:20,491][00667] Fps is (10 sec: 4096.0, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 86016. Throughput: 0: 582.4. Samples: 20382. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:36:20,492][00667] Avg episode reward: [(0, '4.375')]
[2025-02-27 22:36:25,491][00667] Fps is (10 sec: 4096.0, 60 sec: 2662.4, 300 sec: 2662.4). Total num frames: 106496. Throughput: 0: 669.4. Samples: 26776. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:36:25,495][00667] Avg episode reward: [(0, '4.264')]
[2025-02-27 22:36:25,506][02843] Saving new best policy, reward=4.264!
[2025-02-27 22:36:29,889][02856] Updated weights for policy 0, policy_version 30 (0.0018)
[2025-02-27 22:36:30,491][00667] Fps is (10 sec: 3686.4, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 122880. Throughput: 0: 707.3. Samples: 31830. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:36:30,494][00667] Avg episode reward: [(0, '4.378')]
[2025-02-27 22:36:30,498][02843] Saving new best policy, reward=4.378!
[2025-02-27 22:36:35,491][00667] Fps is (10 sec: 3686.4, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 143360. Throughput: 0: 764.4. Samples: 34400. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:36:35,498][00667] Avg episode reward: [(0, '4.411')]
[2025-02-27 22:36:35,507][02843] Saving new best policy, reward=4.411!
[2025-02-27 22:36:40,492][00667] Fps is (10 sec: 3686.2, 60 sec: 2904.4, 300 sec: 2904.4). Total num frames: 159744. Throughput: 0: 893.0. Samples: 40322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:36:40,496][00667] Avg episode reward: [(0, '4.445')]
[2025-02-27 22:36:40,500][02843] Saving new best policy, reward=4.445!
[2025-02-27 22:36:41,213][02856] Updated weights for policy 0, policy_version 40 (0.0013)
[2025-02-27 22:36:45,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3003.8, 300 sec: 3003.8). Total num frames: 180224. Throughput: 0: 955.3. Samples: 45612. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:36:45,495][00667] Avg episode reward: [(0, '4.444')]
[2025-02-27 22:36:50,491][00667] Fps is (10 sec: 4096.2, 60 sec: 3345.1, 300 sec: 3087.8). Total num frames: 200704. Throughput: 0: 971.3. Samples: 48952. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:36:50,493][00667] Avg episode reward: [(0, '4.473')]
[2025-02-27 22:36:50,497][02843] Saving new best policy, reward=4.473!
[2025-02-27 22:36:50,826][02856] Updated weights for policy 0, policy_version 50 (0.0018)
[2025-02-27 22:36:55,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3101.3). Total num frames: 217088. Throughput: 0: 960.6. Samples: 55016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:36:55,494][00667] Avg episode reward: [(0, '4.417')]
[2025-02-27 22:37:00,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3167.6). Total num frames: 237568. Throughput: 0: 962.2. Samples: 60218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:37:00,495][00667] Avg episode reward: [(0, '4.413')]
[2025-02-27 22:37:01,916][02856] Updated weights for policy 0, policy_version 60 (0.0017)
[2025-02-27 22:37:05,491][00667] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3276.8). Total num frames: 262144. Throughput: 0: 960.7. Samples: 63614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:37:05,496][00667] Avg episode reward: [(0, '4.588')]
[2025-02-27 22:37:05,502][02843] Saving new best policy, reward=4.588!
[2025-02-27 22:37:10,492][00667] Fps is (10 sec: 4095.5, 60 sec: 3891.1, 300 sec: 3276.8). Total num frames: 278528. Throughput: 0: 953.6. Samples: 69688. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:37:10,495][00667] Avg episode reward: [(0, '4.480')]
[2025-02-27 22:37:12,814][02856] Updated weights for policy 0, policy_version 70 (0.0019)
[2025-02-27 22:37:15,491][00667] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3276.8). Total num frames: 294912. Throughput: 0: 951.2. Samples: 74632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:37:15,495][00667] Avg episode reward: [(0, '4.431')]
[2025-02-27 22:37:15,503][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000072_294912.pth...
[2025-02-27 22:37:20,491][00667] Fps is (10 sec: 4096.5, 60 sec: 3891.2, 300 sec: 3363.0). Total num frames: 319488. Throughput: 0: 967.6. Samples: 77942. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:37:20,495][00667] Avg episode reward: [(0, '4.492')]
[2025-02-27 22:37:22,325][02856] Updated weights for policy 0, policy_version 80 (0.0020)
[2025-02-27 22:37:25,491][00667] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3358.7). Total num frames: 335872. Throughput: 0: 968.1. Samples: 83886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:37:25,497][00667] Avg episode reward: [(0, '4.450')]
[2025-02-27 22:37:30,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3393.8). Total num frames: 356352. Throughput: 0: 974.2. Samples: 89452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:37:30,495][00667] Avg episode reward: [(0, '4.370')]
[2025-02-27 22:37:34,258][02856] Updated weights for policy 0, policy_version 90 (0.0015)
[2025-02-27 22:37:35,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3388.5). Total num frames: 372736. Throughput: 0: 957.7. Samples: 92048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:37:35,492][00667] Avg episode reward: [(0, '4.482')]
[2025-02-27 22:37:40,491][00667] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3383.7). Total num frames: 389120. Throughput: 0: 950.5. Samples: 97788. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:37:40,495][00667] Avg episode reward: [(0, '4.610')]
[2025-02-27 22:37:40,497][02843] Saving new best policy, reward=4.610!
[2025-02-27 22:37:44,979][02856] Updated weights for policy 0, policy_version 100 (0.0014)
[2025-02-27 22:37:45,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3413.3). Total num frames: 409600. Throughput: 0: 962.8. Samples: 103544. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:37:45,492][00667] Avg episode reward: [(0, '4.645')]
[2025-02-27 22:37:45,499][02843] Saving new best policy, reward=4.645!
[2025-02-27 22:37:50,491][00667] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3473.4). Total num frames: 434176. Throughput: 0: 962.8. Samples: 106938. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:37:50,492][00667] Avg episode reward: [(0, '4.486')]
[2025-02-27 22:37:54,676][02856] Updated weights for policy 0, policy_version 110 (0.0017)
[2025-02-27 22:37:55,491][00667] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3465.9). Total num frames: 450560. Throughput: 0: 959.3. Samples: 112854. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:37:55,497][00667] Avg episode reward: [(0, '4.383')]
[2025-02-27 22:38:00,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3489.2). Total num frames: 471040. Throughput: 0: 981.0. Samples: 118776. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:38:00,496][00667] Avg episode reward: [(0, '4.368')]
[2025-02-27 22:38:04,304][02856] Updated weights for policy 0, policy_version 120 (0.0021)
[2025-02-27 22:38:05,491][00667] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3540.1). Total num frames: 495616. Throughput: 0: 986.1. Samples: 122318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:38:05,493][00667] Avg episode reward: [(0, '4.584')]
[2025-02-27 22:38:10,495][00667] Fps is (10 sec: 4094.5, 60 sec: 3891.0, 300 sec: 3531.0). Total num frames: 512000. Throughput: 0: 979.9. Samples: 127984. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:38:10,496][00667] Avg episode reward: [(0, '4.622')]
[2025-02-27 22:38:14,917][02856] Updated weights for policy 0, policy_version 130 (0.0030)
[2025-02-27 22:38:15,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3549.9). Total num frames: 532480. Throughput: 0: 993.6. Samples: 134162. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:38:15,492][00667] Avg episode reward: [(0, '4.434')]
[2025-02-27 22:38:20,491][00667] Fps is (10 sec: 4507.2, 60 sec: 3959.5, 300 sec: 3593.9). Total num frames: 557056. Throughput: 0: 1013.6. Samples: 137660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:38:20,495][00667] Avg episode reward: [(0, '4.362')]
[2025-02-27 22:38:25,356][02856] Updated weights for policy 0, policy_version 140 (0.0026)
[2025-02-27 22:38:25,491][00667] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3584.0). Total num frames: 573440. Throughput: 0: 1008.3. Samples: 143160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:38:25,498][00667] Avg episode reward: [(0, '4.296')]
[2025-02-27 22:38:30,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3599.5). Total num frames: 593920. Throughput: 0: 1016.4. Samples: 149282. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:38:30,495][00667] Avg episode reward: [(0, '4.256')]
[2025-02-27 22:38:35,000][02856] Updated weights for policy 0, policy_version 150 (0.0017)
[2025-02-27 22:38:35,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3614.1). Total num frames: 614400. Throughput: 0: 1007.5. Samples: 152274. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:38:35,495][00667] Avg episode reward: [(0, '4.093')]
[2025-02-27 22:38:40,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3604.5). Total num frames: 630784. Throughput: 0: 998.2. Samples: 157774. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:38:40,494][00667] Avg episode reward: [(0, '4.261')]
[2025-02-27 22:38:45,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3618.1). Total num frames: 651264. Throughput: 0: 1006.8. Samples: 164082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:38:45,495][00667] Avg episode reward: [(0, '4.614')]
[2025-02-27 22:38:45,712][02856] Updated weights for policy 0, policy_version 160 (0.0015)
[2025-02-27 22:38:50,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3653.2). Total num frames: 675840. Throughput: 0: 1004.8. Samples: 167536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:38:50,492][00667] Avg episode reward: [(0, '4.680')]
[2025-02-27 22:38:50,494][02843] Saving new best policy, reward=4.680!
[2025-02-27 22:38:55,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3643.3). Total num frames: 692224. Throughput: 0: 997.3. Samples: 172858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:38:55,500][00667] Avg episode reward: [(0, '4.649')]
[2025-02-27 22:38:56,346][02856] Updated weights for policy 0, policy_version 170 (0.0030)
[2025-02-27 22:39:00,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3654.9). Total num frames: 712704. Throughput: 0: 1002.0. Samples: 179250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:39:00,495][00667] Avg episode reward: [(0, '4.647')]
[2025-02-27 22:39:05,122][02856] Updated weights for policy 0, policy_version 180 (0.0028)
[2025-02-27 22:39:05,494][00667] Fps is (10 sec: 4504.1, 60 sec: 4027.5, 300 sec: 3686.3). Total num frames: 737280. Throughput: 0: 1002.5. Samples: 182778. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:39:05,497][00667] Avg episode reward: [(0, '4.471')]
[2025-02-27 22:39:10,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.7, 300 sec: 3656.4). Total num frames: 749568. Throughput: 0: 995.2. Samples: 187944. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-02-27 22:39:10,492][00667] Avg episode reward: [(0, '4.635')]
[2025-02-27 22:39:15,491][00667] Fps is (10 sec: 3687.6, 60 sec: 4027.7, 300 sec: 3686.4). Total num frames: 774144. Throughput: 0: 1007.7. Samples: 194628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:39:15,492][00667] Avg episode reward: [(0, '4.753')]
[2025-02-27 22:39:15,502][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000189_774144.pth...
[2025-02-27 22:39:15,632][02843] Saving new best policy, reward=4.753!
[2025-02-27 22:39:15,878][02856] Updated weights for policy 0, policy_version 190 (0.0028)
[2025-02-27 22:39:20,491][00667] Fps is (10 sec: 4915.2, 60 sec: 4027.7, 300 sec: 3715.0). Total num frames: 798720. Throughput: 0: 1016.2. Samples: 198002. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:39:20,497][00667] Avg episode reward: [(0, '4.519')]
[2025-02-27 22:39:25,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3686.4). Total num frames: 811008. Throughput: 0: 1009.3. Samples: 203192. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:39:25,493][00667] Avg episode reward: [(0, '4.323')]
[2025-02-27 22:39:26,540][02856] Updated weights for policy 0, policy_version 200 (0.0018)
[2025-02-27 22:39:30,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3713.7). Total num frames: 835584. Throughput: 0: 1015.2. Samples: 209768. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:39:30,494][00667] Avg episode reward: [(0, '4.541')]
[2025-02-27 22:39:35,354][02856] Updated weights for policy 0, policy_version 210 (0.0030)
[2025-02-27 22:39:35,491][00667] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3739.8). Total num frames: 860160. Throughput: 0: 1018.9. Samples: 213388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:39:35,492][00667] Avg episode reward: [(0, '4.589')]
[2025-02-27 22:39:40,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3712.6). Total num frames: 872448. Throughput: 0: 1010.5. Samples: 218330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:39:40,496][00667] Avg episode reward: [(0, '4.635')]
[2025-02-27 22:39:45,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3737.6). Total num frames: 897024. Throughput: 0: 1018.7. Samples: 225090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:39:45,492][00667] Avg episode reward: [(0, '4.442')]
[2025-02-27 22:39:45,949][02856] Updated weights for policy 0, policy_version 220 (0.0016)
[2025-02-27 22:39:50,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3744.9). Total num frames: 917504. Throughput: 0: 1017.5. Samples: 228564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:39:50,495][00667] Avg episode reward: [(0, '4.403')]
[2025-02-27 22:39:55,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3735.6). Total num frames: 933888. Throughput: 0: 1010.5. Samples: 233416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:39:55,495][00667] Avg episode reward: [(0, '4.627')]
[2025-02-27 22:39:56,568][02856] Updated weights for policy 0, policy_version 230 (0.0022)
[2025-02-27 22:40:00,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3758.7). Total num frames: 958464. Throughput: 0: 1015.4. Samples: 240320. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:40:00,495][00667] Avg episode reward: [(0, '4.749')]
[2025-02-27 22:40:05,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4028.0, 300 sec: 3765.2). Total num frames: 978944. Throughput: 0: 1018.6. Samples: 243840. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:40:05,494][00667] Avg episode reward: [(0, '4.670')]
[2025-02-27 22:40:06,137][02856] Updated weights for policy 0, policy_version 240 (0.0027)
[2025-02-27 22:40:10,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3756.0). Total num frames: 995328. Throughput: 0: 1010.1. Samples: 248646. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:40:10,496][00667] Avg episode reward: [(0, '4.538')]
[2025-02-27 22:40:15,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3777.4). Total num frames: 1019904. Throughput: 0: 1020.0. Samples: 255670. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:40:15,496][00667] Avg episode reward: [(0, '4.739')]
[2025-02-27 22:40:16,038][02856] Updated weights for policy 0, policy_version 250 (0.0031)
[2025-02-27 22:40:20,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3783.2). Total num frames: 1040384. Throughput: 0: 1017.2. Samples: 259162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:40:20,498][00667] Avg episode reward: [(0, '4.669')]
[2025-02-27 22:40:25,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3774.2). Total num frames: 1056768. Throughput: 0: 1016.0. Samples: 264052. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:40:25,495][00667] Avg episode reward: [(0, '4.719')]
[2025-02-27 22:40:26,647][02856] Updated weights for policy 0, policy_version 260 (0.0027)
[2025-02-27 22:40:30,491][00667] Fps is (10 sec: 4095.9, 60 sec: 4096.0, 300 sec: 3794.2). Total num frames: 1081344. Throughput: 0: 1019.9. Samples: 270986. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:40:30,495][00667] Avg episode reward: [(0, '5.061')]
[2025-02-27 22:40:30,499][02843] Saving new best policy, reward=5.061!
[2025-02-27 22:40:35,496][00667] Fps is (10 sec: 4503.3, 60 sec: 4027.4, 300 sec: 3799.3). Total num frames: 1101824. Throughput: 0: 1020.9. Samples: 274512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:40:35,500][00667] Avg episode reward: [(0, '4.967')]
[2025-02-27 22:40:36,255][02856] Updated weights for policy 0, policy_version 270 (0.0013)
[2025-02-27 22:40:40,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3790.5). Total num frames: 1118208. Throughput: 0: 1021.6. Samples: 279386. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-27 22:40:40,495][00667] Avg episode reward: [(0, '5.160')]
[2025-02-27 22:40:40,497][02843] Saving new best policy, reward=5.160!
[2025-02-27 22:40:45,491][00667] Fps is (10 sec: 4098.1, 60 sec: 4096.0, 300 sec: 3873.8). Total num frames: 1142784. Throughput: 0: 1024.6. Samples: 286426. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:40:45,496][00667] Avg episode reward: [(0, '5.400')]
[2025-02-27 22:40:45,503][02843] Saving new best policy, reward=5.400!
[2025-02-27 22:40:46,180][02856] Updated weights for policy 0, policy_version 280 (0.0032)
[2025-02-27 22:40:50,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3943.3). Total num frames: 1163264. Throughput: 0: 1022.1. Samples: 289834. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:40:50,493][00667] Avg episode reward: [(0, '5.088')]
[2025-02-27 22:40:55,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 1179648. Throughput: 0: 1021.6. Samples: 294618. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:40:55,492][00667] Avg episode reward: [(0, '4.982')]
[2025-02-27 22:40:56,735][02856] Updated weights for policy 0, policy_version 290 (0.0026)
[2025-02-27 22:41:00,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 1204224. Throughput: 0: 1021.3. Samples: 301630. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:41:00,493][00667] Avg episode reward: [(0, '5.363')]
[2025-02-27 22:41:05,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 1224704. Throughput: 0: 1020.7. Samples: 305094. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:41:05,492][00667] Avg episode reward: [(0, '5.499')]
[2025-02-27 22:41:05,502][02843] Saving new best policy, reward=5.499!
[2025-02-27 22:41:06,837][02856] Updated weights for policy 0, policy_version 300 (0.0027)
[2025-02-27 22:41:10,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 1241088. Throughput: 0: 1017.8. Samples: 309854. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:41:10,494][00667] Avg episode reward: [(0, '5.270')]
[2025-02-27 22:41:15,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 1265664. Throughput: 0: 1024.0. Samples: 317068. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:41:15,496][00667] Avg episode reward: [(0, '5.693')]
[2025-02-27 22:41:15,504][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000309_1265664.pth...
[2025-02-27 22:41:15,633][02843] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000072_294912.pth
[2025-02-27 22:41:15,650][02843] Saving new best policy, reward=5.693!
[2025-02-27 22:41:16,289][02856] Updated weights for policy 0, policy_version 310 (0.0015)
[2025-02-27 22:41:20,491][00667] Fps is (10 sec: 4095.8, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 1282048. Throughput: 0: 1019.5. Samples: 320384. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:41:20,494][00667] Avg episode reward: [(0, '5.987')]
[2025-02-27 22:41:20,497][02843] Saving new best policy, reward=5.987!
[2025-02-27 22:41:25,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 1302528. Throughput: 0: 1019.2. Samples: 325250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:41:25,492][00667] Avg episode reward: [(0, '6.041')]
[2025-02-27 22:41:25,498][02843] Saving new best policy, reward=6.041!
[2025-02-27 22:41:26,943][02856] Updated weights for policy 0, policy_version 320 (0.0022)
[2025-02-27 22:41:30,491][00667] Fps is (10 sec: 4505.7, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 1327104. Throughput: 0: 1017.0. Samples: 332192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:41:30,494][00667] Avg episode reward: [(0, '5.601')]
[2025-02-27 22:41:35,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4028.1, 300 sec: 4012.7). Total num frames: 1343488. Throughput: 0: 1014.4. Samples: 335484. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:41:35,493][00667] Avg episode reward: [(0, '5.583')]
[2025-02-27 22:41:37,320][02856] Updated weights for policy 0, policy_version 330 (0.0017)
[2025-02-27 22:41:40,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 1363968. Throughput: 0: 1021.1. Samples: 340568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:41:40,493][00667] Avg episode reward: [(0, '5.699')]
[2025-02-27 22:41:45,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 1388544. Throughput: 0: 1022.4. Samples: 347640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:41:45,492][00667] Avg episode reward: [(0, '5.808')]
[2025-02-27 22:41:46,175][02856] Updated weights for policy 0, policy_version 340 (0.0028)
[2025-02-27 22:41:50,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 1404928. Throughput: 0: 1018.4. Samples: 350920. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:41:50,492][00667] Avg episode reward: [(0, '5.931')]
[2025-02-27 22:41:55,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 1425408. Throughput: 0: 1026.6. Samples: 356050. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:41:55,492][00667] Avg episode reward: [(0, '5.793')]
[2025-02-27 22:41:56,816][02856] Updated weights for policy 0, policy_version 350 (0.0027)
[2025-02-27 22:42:00,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 1449984. Throughput: 0: 1021.0. Samples: 363012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:42:00,493][00667] Avg episode reward: [(0, '5.794')]
[2025-02-27 22:42:05,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 1466368. Throughput: 0: 1016.6. Samples: 366132. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:42:05,495][00667] Avg episode reward: [(0, '6.400')]
[2025-02-27 22:42:05,503][02843] Saving new best policy, reward=6.400!
[2025-02-27 22:42:07,555][02856] Updated weights for policy 0, policy_version 360 (0.0013)
[2025-02-27 22:42:10,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 1486848. Throughput: 0: 1021.9. Samples: 371236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:42:10,493][00667] Avg episode reward: [(0, '6.694')]
[2025-02-27 22:42:10,494][02843] Saving new best policy, reward=6.694!
[2025-02-27 22:42:15,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 1511424. Throughput: 0: 1023.4. Samples: 378244. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:42:15,492][00667] Avg episode reward: [(0, '6.093')]
[2025-02-27 22:42:16,278][02856] Updated weights for policy 0, policy_version 370 (0.0016)
[2025-02-27 22:42:20,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 1527808. Throughput: 0: 1015.3. Samples: 381172. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:42:20,493][00667] Avg episode reward: [(0, '6.621')]
[2025-02-27 22:42:25,492][00667] Fps is (10 sec: 3276.5, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 1544192. Throughput: 0: 1014.7. Samples: 386230. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:42:25,496][00667] Avg episode reward: [(0, '7.437')]
[2025-02-27 22:42:25,503][02843] Saving new best policy, reward=7.437!
[2025-02-27 22:42:27,407][02856] Updated weights for policy 0, policy_version 380 (0.0020)
[2025-02-27 22:42:30,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 1568768. Throughput: 0: 1005.0. Samples: 392866. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:42:30,495][00667] Avg episode reward: [(0, '7.705')]
[2025-02-27 22:42:30,502][02843] Saving new best policy, reward=7.705!
[2025-02-27 22:42:35,491][00667] Fps is (10 sec: 4096.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 1585152. Throughput: 0: 994.4. Samples: 395670. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:42:35,495][00667] Avg episode reward: [(0, '7.508')]
[2025-02-27 22:42:38,428][02856] Updated weights for policy 0, policy_version 390 (0.0019)
[2025-02-27 22:42:40,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 1605632. Throughput: 0: 998.1. Samples: 400966. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:42:40,495][00667] Avg episode reward: [(0, '7.609')]
[2025-02-27 22:42:45,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 1630208. Throughput: 0: 999.7. Samples: 407998. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:42:45,492][00667] Avg episode reward: [(0, '7.389')]
[2025-02-27 22:42:47,324][02856] Updated weights for policy 0, policy_version 400 (0.0015)
[2025-02-27 22:42:50,493][00667] Fps is (10 sec: 4095.3, 60 sec: 4027.6, 300 sec: 4054.3). Total num frames: 1646592. Throughput: 0: 991.7. Samples: 410762. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:42:50,499][00667] Avg episode reward: [(0, '7.640')]
[2025-02-27 22:42:55,491][00667] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 1667072. Throughput: 0: 1000.0. Samples: 416236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:42:55,492][00667] Avg episode reward: [(0, '8.206')]
[2025-02-27 22:42:55,501][02843] Saving new best policy, reward=8.206!
[2025-02-27 22:42:57,989][02856] Updated weights for policy 0, policy_version 410 (0.0017)
[2025-02-27 22:43:00,491][00667] Fps is (10 sec: 4096.7, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1687552. Throughput: 0: 996.3. Samples: 423078. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:43:00,493][00667] Avg episode reward: [(0, '9.103')]
[2025-02-27 22:43:00,497][02843] Saving new best policy, reward=9.103!
[2025-02-27 22:43:05,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1703936. Throughput: 0: 989.6. Samples: 425706. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:43:05,494][00667] Avg episode reward: [(0, '8.880')]
[2025-02-27 22:43:08,730][02856] Updated weights for policy 0, policy_version 420 (0.0022)
[2025-02-27 22:43:10,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 1728512. Throughput: 0: 1002.5. Samples: 431342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:43:10,492][00667] Avg episode reward: [(0, '7.950')]
[2025-02-27 22:43:15,491][00667] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1748992. Throughput: 0: 1011.1. Samples: 438364. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:43:15,493][00667] Avg episode reward: [(0, '7.996')]
[2025-02-27 22:43:15,587][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000428_1753088.pth...
[2025-02-27 22:43:15,721][02843] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000189_774144.pth
[2025-02-27 22:43:18,016][02856] Updated weights for policy 0, policy_version 430 (0.0014)
[2025-02-27 22:43:20,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1765376. Throughput: 0: 1004.8. Samples: 440888. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-27 22:43:20,496][00667] Avg episode reward: [(0, '8.196')]
[2025-02-27 22:43:25,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.8, 300 sec: 4040.5). Total num frames: 1785856. Throughput: 0: 1013.7. Samples: 446582. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:43:25,495][00667] Avg episode reward: [(0, '9.006')]
[2025-02-27 22:43:28,181][02856] Updated weights for policy 0, policy_version 440 (0.0013)
[2025-02-27 22:43:30,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 1810432. Throughput: 0: 1008.2. Samples: 453368. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:43:30,496][00667] Avg episode reward: [(0, '9.897')]
[2025-02-27 22:43:30,501][02843] Saving new best policy, reward=9.897!
[2025-02-27 22:43:35,491][00667] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1822720. Throughput: 0: 1000.9. Samples: 455802. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2025-02-27 22:43:35,495][00667] Avg episode reward: [(0, '10.882')]
[2025-02-27 22:43:35,626][02843] Saving new best policy, reward=10.882!
[2025-02-27 22:43:39,163][02856] Updated weights for policy 0, policy_version 450 (0.0047)
[2025-02-27 22:43:40,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 1847296. Throughput: 0: 1004.9. Samples: 461456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:43:40,495][00667] Avg episode reward: [(0, '11.701')]
[2025-02-27 22:43:40,498][02843] Saving new best policy, reward=11.701!
[2025-02-27 22:43:45,491][00667] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1867776. Throughput: 0: 1003.6. Samples: 468240. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:43:45,497][00667] Avg episode reward: [(0, '13.319')]
[2025-02-27 22:43:45,572][02843] Saving new best policy, reward=13.319!
[2025-02-27 22:43:49,893][02856] Updated weights for policy 0, policy_version 460 (0.0026)
[2025-02-27 22:43:50,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 4040.5). Total num frames: 1884160. Throughput: 0: 996.0. Samples: 470526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:43:50,495][00667] Avg episode reward: [(0, '13.692')]
[2025-02-27 22:43:50,497][02843] Saving new best policy, reward=13.692!
[2025-02-27 22:43:55,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1904640. Throughput: 0: 998.1. Samples: 476258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:43:55,495][00667] Avg episode reward: [(0, '14.203')]
[2025-02-27 22:43:55,504][02843] Saving new best policy, reward=14.203!
[2025-02-27 22:43:59,174][02856] Updated weights for policy 0, policy_version 470 (0.0023)
[2025-02-27 22:44:00,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 1929216. Throughput: 0: 992.7. Samples: 483034. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:44:00,493][00667] Avg episode reward: [(0, '13.192')]
[2025-02-27 22:44:05,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1941504. Throughput: 0: 987.7. Samples: 485336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:44:05,492][00667] Avg episode reward: [(0, '12.667')]
[2025-02-27 22:44:10,111][02856] Updated weights for policy 0, policy_version 480 (0.0026)
[2025-02-27 22:44:10,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 1966080. Throughput: 0: 992.6. Samples: 491248. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:44:10,493][00667] Avg episode reward: [(0, '12.911')]
[2025-02-27 22:44:15,491][00667] Fps is (10 sec: 4915.1, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 1990656. Throughput: 0: 998.9. Samples: 498318. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:44:15,493][00667] Avg episode reward: [(0, '12.540')]
[2025-02-27 22:44:20,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 2002944. Throughput: 0: 991.6. Samples: 500424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:44:20,493][00667] Avg episode reward: [(0, '13.884')]
[2025-02-27 22:44:20,858][02856] Updated weights for policy 0, policy_version 490 (0.0020)
[2025-02-27 22:44:25,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2027520. Throughput: 0: 1002.0. Samples: 506548. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:44:25,497][00667] Avg episode reward: [(0, '14.532')]
[2025-02-27 22:44:25,504][02843] Saving new best policy, reward=14.532!
[2025-02-27 22:44:29,355][02856] Updated weights for policy 0, policy_version 500 (0.0016)
[2025-02-27 22:44:30,492][00667] Fps is (10 sec: 4914.9, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2052096. Throughput: 0: 1007.6. Samples: 513584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:44:30,497][00667] Avg episode reward: [(0, '14.955')]
[2025-02-27 22:44:30,498][02843] Saving new best policy, reward=14.955!
[2025-02-27 22:44:35,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2064384. Throughput: 0: 1002.6. Samples: 515642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:44:35,496][00667] Avg episode reward: [(0, '16.307')]
[2025-02-27 22:44:35,504][02843] Saving new best policy, reward=16.307!
[2025-02-27 22:44:40,237][02856] Updated weights for policy 0, policy_version 510 (0.0018)
[2025-02-27 22:44:40,491][00667] Fps is (10 sec: 3686.7, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2088960. Throughput: 0: 1009.6. Samples: 521692. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:44:40,496][00667] Avg episode reward: [(0, '16.501')]
[2025-02-27 22:44:40,502][02843] Saving new best policy, reward=16.501!
[2025-02-27 22:44:45,493][00667] Fps is (10 sec: 4504.7, 60 sec: 4027.6, 300 sec: 4040.4). Total num frames: 2109440. Throughput: 0: 1013.1. Samples: 528626. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:44:45,494][00667] Avg episode reward: [(0, '16.809')]
[2025-02-27 22:44:45,503][02843] Saving new best policy, reward=16.809!
[2025-02-27 22:44:50,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2125824. Throughput: 0: 1008.5. Samples: 530720. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:44:50,493][00667] Avg episode reward: [(0, '17.440')]
[2025-02-27 22:44:50,497][02843] Saving new best policy, reward=17.440!
[2025-02-27 22:44:51,099][02856] Updated weights for policy 0, policy_version 520 (0.0019)
[2025-02-27 22:44:55,491][00667] Fps is (10 sec: 4096.8, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2150400. Throughput: 0: 1015.0. Samples: 536922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:44:55,495][00667] Avg episode reward: [(0, '16.189')]
[2025-02-27 22:44:59,675][02856] Updated weights for policy 0, policy_version 530 (0.0017)
[2025-02-27 22:45:00,491][00667] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2170880. Throughput: 0: 1014.3. Samples: 543960. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:45:00,494][00667] Avg episode reward: [(0, '16.796')]
[2025-02-27 22:45:05,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2187264. Throughput: 0: 1014.6. Samples: 546080. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:45:05,496][00667] Avg episode reward: [(0, '16.845')]
[2025-02-27 22:45:10,407][02856] Updated weights for policy 0, policy_version 540 (0.0020)
[2025-02-27 22:45:10,491][00667] Fps is (10 sec: 4096.1, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2211840. Throughput: 0: 1018.1. Samples: 552362. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:45:10,492][00667] Avg episode reward: [(0, '18.291')]
[2025-02-27 22:45:10,494][02843] Saving new best policy, reward=18.291!
[2025-02-27 22:45:15,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2232320. Throughput: 0: 1014.5. Samples: 559236. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:45:15,498][00667] Avg episode reward: [(0, '17.939')]
[2025-02-27 22:45:15,514][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000545_2232320.pth...
[2025-02-27 22:45:15,664][02843] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000309_1265664.pth
[2025-02-27 22:45:20,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2248704. Throughput: 0: 1015.8. Samples: 561352. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:45:20,492][00667] Avg episode reward: [(0, '18.339')]
[2025-02-27 22:45:20,498][02843] Saving new best policy, reward=18.339!
[2025-02-27 22:45:21,063][02856] Updated weights for policy 0, policy_version 550 (0.0020)
[2025-02-27 22:45:25,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2273280. Throughput: 0: 1022.8. Samples: 567720. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:45:25,495][00667] Avg episode reward: [(0, '17.796')]
[2025-02-27 22:45:29,752][02856] Updated weights for policy 0, policy_version 560 (0.0012)
[2025-02-27 22:45:30,498][00667] Fps is (10 sec: 4502.7, 60 sec: 4027.3, 300 sec: 4040.4). Total num frames: 2293760. Throughput: 0: 1019.4. Samples: 574506. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:45:30,501][00667] Avg episode reward: [(0, '15.491')]
[2025-02-27 22:45:35,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2310144. Throughput: 0: 1018.5. Samples: 576554. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:45:35,495][00667] Avg episode reward: [(0, '15.523')]
[2025-02-27 22:45:40,491][00667] Fps is (10 sec: 3688.8, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2330624. Throughput: 0: 1018.3. Samples: 582744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:45:40,496][00667] Avg episode reward: [(0, '16.160')]
[2025-02-27 22:45:40,591][02856] Updated weights for policy 0, policy_version 570 (0.0016)
[2025-02-27 22:45:45,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.1, 300 sec: 4040.5). Total num frames: 2355200. Throughput: 0: 1006.5. Samples: 589252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:45:45,495][00667] Avg episode reward: [(0, '16.029')]
[2025-02-27 22:45:50,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2367488. Throughput: 0: 1004.8. Samples: 591298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:45:50,493][00667] Avg episode reward: [(0, '16.321')]
[2025-02-27 22:45:51,526][02856] Updated weights for policy 0, policy_version 580 (0.0015)
[2025-02-27 22:45:55,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2392064. Throughput: 0: 1005.1. Samples: 597592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:45:55,492][00667] Avg episode reward: [(0, '17.341')]
[2025-02-27 22:46:00,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2412544. Throughput: 0: 1001.4. Samples: 604298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:46:00,494][00667] Avg episode reward: [(0, '17.320')]
[2025-02-27 22:46:00,784][02856] Updated weights for policy 0, policy_version 590 (0.0033)
[2025-02-27 22:46:05,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2428928. Throughput: 0: 1000.5. Samples: 606374. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:46:05,496][00667] Avg episode reward: [(0, '17.397')]
[2025-02-27 22:46:10,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2453504. Throughput: 0: 1005.3. Samples: 612958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:46:10,496][00667] Avg episode reward: [(0, '18.056')]
[2025-02-27 22:46:11,039][02856] Updated weights for policy 0, policy_version 600 (0.0015)
[2025-02-27 22:46:15,492][00667] Fps is (10 sec: 4505.3, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2473984. Throughput: 0: 1001.8. Samples: 619582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:46:15,493][00667] Avg episode reward: [(0, '19.341')]
[2025-02-27 22:46:15,516][02843] Saving new best policy, reward=19.341!
[2025-02-27 22:46:20,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2490368. Throughput: 0: 1000.7. Samples: 621586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:46:20,493][00667] Avg episode reward: [(0, '18.948')]
[2025-02-27 22:46:21,626][02856] Updated weights for policy 0, policy_version 610 (0.0015)
[2025-02-27 22:46:25,491][00667] Fps is (10 sec: 4096.2, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2514944. Throughput: 0: 1012.1. Samples: 628290. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:46:25,492][00667] Avg episode reward: [(0, '18.695')]
[2025-02-27 22:46:30,491][00667] Fps is (10 sec: 4505.7, 60 sec: 4028.2, 300 sec: 4040.5). Total num frames: 2535424. Throughput: 0: 1005.8. Samples: 634514. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:46:30,494][00667] Avg episode reward: [(0, '18.574')]
[2025-02-27 22:46:31,507][02856] Updated weights for policy 0, policy_version 620 (0.0019)
[2025-02-27 22:46:35,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 2551808. Throughput: 0: 1005.4. Samples: 636542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:46:35,492][00667] Avg episode reward: [(0, '17.057')]
[2025-02-27 22:46:40,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 2576384. Throughput: 0: 1021.6. Samples: 643564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:46:40,495][00667] Avg episode reward: [(0, '17.121')]
[2025-02-27 22:46:41,179][02856] Updated weights for policy 0, policy_version 630 (0.0021)
[2025-02-27 22:46:45,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2596864. Throughput: 0: 1010.3. Samples: 649760. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:46:45,492][00667] Avg episode reward: [(0, '17.384')]
[2025-02-27 22:46:50,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 2613248. Throughput: 0: 1011.4. Samples: 651888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:46:50,492][00667] Avg episode reward: [(0, '17.557')]
[2025-02-27 22:46:51,833][02856] Updated weights for policy 0, policy_version 640 (0.0022)
[2025-02-27 22:46:55,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 2637824. Throughput: 0: 1021.1. Samples: 658908. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:46:55,493][00667] Avg episode reward: [(0, '18.648')]
[2025-02-27 22:47:00,497][00667] Fps is (10 sec: 4093.6, 60 sec: 4027.3, 300 sec: 4026.5). Total num frames: 2654208. Throughput: 0: 1010.2. Samples: 665046. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:47:00,498][00667] Avg episode reward: [(0, '18.940')]
[2025-02-27 22:47:02,002][02856] Updated weights for policy 0, policy_version 650 (0.0021)
[2025-02-27 22:47:05,491][00667] Fps is (10 sec: 3686.3, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 2674688. Throughput: 0: 1014.2. Samples: 667226. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:47:05,493][00667] Avg episode reward: [(0, '18.864')]
[2025-02-27 22:47:10,491][00667] Fps is (10 sec: 4508.2, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 2699264. Throughput: 0: 1022.7. Samples: 674312. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:47:10,492][00667] Avg episode reward: [(0, '20.270')]
[2025-02-27 22:47:10,498][02843] Saving new best policy, reward=20.270!
[2025-02-27 22:47:11,157][02856] Updated weights for policy 0, policy_version 660 (0.0032)
[2025-02-27 22:47:15,491][00667] Fps is (10 sec: 4096.1, 60 sec: 4027.8, 300 sec: 4026.6). Total num frames: 2715648. Throughput: 0: 1018.0. Samples: 680326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:47:15,492][00667] Avg episode reward: [(0, '20.107')]
[2025-02-27 22:47:15,540][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000664_2719744.pth...
[2025-02-27 22:47:15,700][02843] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000428_1753088.pth
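The checkpoint file name encodes the policy version and the total environment frames (664 × 4096 = 2719744 here, and the same ×4096 relation holds for every checkpoint in this log), and each save is paired with the removal of the oldest file, so only the two most recent rolling checkpoints survive; "Saving new best policy" checkpoints are tracked separately whenever the average reward improves. A rough sketch of that observable behaviour, with hypothetical names (Sample Factory's own logic lives in its learner):

```python
import os
import torch

KEEP_LAST = 2  # matches the log: every save is followed by one removal

kept: list[str] = []
best_reward = float("-inf")

def save_checkpoint(model, ckpt_dir, policy_version, env_frames):
    # e.g. checkpoint_000000664_2719744.pth
    path = os.path.join(
        ckpt_dir, f"checkpoint_{policy_version:09d}_{env_frames}.pth")
    torch.save(model.state_dict(), path)
    kept.append(path)
    while len(kept) > KEEP_LAST:
        os.remove(kept.pop(0))  # "Removing /content/train_dir/..."

def maybe_save_best(model, ckpt_dir, avg_reward):
    global best_reward
    if avg_reward > best_reward:  # "Saving new best policy, reward=..."
        best_reward = avg_reward
        torch.save(model.state_dict(), os.path.join(ckpt_dir, "best.pth"))
```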
[2025-02-27 22:47:20,491][00667] Fps is (10 sec: 3686.3, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2736128. Throughput: 0: 1025.2. Samples: 682674. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:47:20,493][00667] Avg episode reward: [(0, '21.103')]
[2025-02-27 22:47:20,494][02843] Saving new best policy, reward=21.103!
[2025-02-27 22:47:21,729][02856] Updated weights for policy 0, policy_version 670 (0.0024)
[2025-02-27 22:47:25,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2760704. Throughput: 0: 1024.4. Samples: 689660. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:47:25,494][00667] Avg episode reward: [(0, '20.021')]
[2025-02-27 22:47:30,494][00667] Fps is (10 sec: 4095.0, 60 sec: 4027.6, 300 sec: 4040.4). Total num frames: 2777088. Throughput: 0: 1014.8. Samples: 695428. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:47:30,495][00667] Avg episode reward: [(0, '19.505')]
[2025-02-27 22:47:32,352][02856] Updated weights for policy 0, policy_version 680 (0.0016)
[2025-02-27 22:47:35,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2797568. Throughput: 0: 1023.7. Samples: 697954. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:47:35,496][00667] Avg episode reward: [(0, '19.645')]
[2025-02-27 22:47:40,491][00667] Fps is (10 sec: 4506.7, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2822144. Throughput: 0: 1024.0. Samples: 704988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:47:40,492][00667] Avg episode reward: [(0, '20.016')]
[2025-02-27 22:47:41,192][02856] Updated weights for policy 0, policy_version 690 (0.0018)
[2025-02-27 22:47:45,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2838528. Throughput: 0: 1008.8. Samples: 710434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:47:45,492][00667] Avg episode reward: [(0, '19.865')]
[2025-02-27 22:47:50,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2859008. Throughput: 0: 1022.0. Samples: 713214. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:47:50,496][00667] Avg episode reward: [(0, '21.266')]
[2025-02-27 22:47:50,501][02843] Saving new best policy, reward=21.266!
[2025-02-27 22:47:51,884][02856] Updated weights for policy 0, policy_version 700 (0.0023)
[2025-02-27 22:47:55,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 2883584. Throughput: 0: 1020.0. Samples: 720210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:47:55,493][00667] Avg episode reward: [(0, '19.937')]
[2025-02-27 22:48:00,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.4, 300 sec: 4054.3). Total num frames: 2899968. Throughput: 0: 1004.6. Samples: 725532. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:48:00,492][00667] Avg episode reward: [(0, '19.159')]
[2025-02-27 22:48:02,415][02856] Updated weights for policy 0, policy_version 710 (0.0030)
[2025-02-27 22:48:05,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2920448. Throughput: 0: 1018.3. Samples: 728498. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:48:05,492][00667] Avg episode reward: [(0, '19.905')]
[2025-02-27 22:48:10,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 2945024. Throughput: 0: 1016.5. Samples: 735402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:48:10,492][00667] Avg episode reward: [(0, '20.022')]
[2025-02-27 22:48:11,348][02856] Updated weights for policy 0, policy_version 720 (0.0014)
[2025-02-27 22:48:15,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 2957312. Throughput: 0: 1004.8. Samples: 740642. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:48:15,492][00667] Avg episode reward: [(0, '20.215')]
[2025-02-27 22:48:20,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 2981888. Throughput: 0: 1017.5. Samples: 743742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:48:20,492][00667] Avg episode reward: [(0, '21.558')]
[2025-02-27 22:48:20,497][02843] Saving new best policy, reward=21.558!
[2025-02-27 22:48:22,049][02856] Updated weights for policy 0, policy_version 730 (0.0020)
[2025-02-27 22:48:25,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3002368. Throughput: 0: 1014.8. Samples: 750652. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:48:25,492][00667] Avg episode reward: [(0, '22.828')]
[2025-02-27 22:48:25,499][02843] Saving new best policy, reward=22.828!
[2025-02-27 22:48:30,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.9, 300 sec: 4054.3). Total num frames: 3018752. Throughput: 0: 1002.7. Samples: 755554. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:48:30,494][00667] Avg episode reward: [(0, '21.720')]
[2025-02-27 22:48:32,853][02856] Updated weights for policy 0, policy_version 740 (0.0013)
[2025-02-27 22:48:35,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3039232. Throughput: 0: 1011.2. Samples: 758720. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:48:35,499][00667] Avg episode reward: [(0, '21.092')]
[2025-02-27 22:48:40,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3063808. Throughput: 0: 1009.9. Samples: 765654. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:48:40,496][00667] Avg episode reward: [(0, '22.290')]
[2025-02-27 22:48:42,430][02856] Updated weights for policy 0, policy_version 750 (0.0019)
[2025-02-27 22:48:45,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3080192. Throughput: 0: 1000.8. Samples: 770566. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:48:45,496][00667] Avg episode reward: [(0, '22.868')]
[2025-02-27 22:48:45,507][02843] Saving new best policy, reward=22.868!
[2025-02-27 22:48:50,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3100672. Throughput: 0: 1008.8. Samples: 773894. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:48:50,495][00667] Avg episode reward: [(0, '22.682')]
[2025-02-27 22:48:52,415][02856] Updated weights for policy 0, policy_version 760 (0.0012)
[2025-02-27 22:48:55,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3125248. Throughput: 0: 1011.2. Samples: 780904. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-27 22:48:55,492][00667] Avg episode reward: [(0, '23.860')]
[2025-02-27 22:48:55,499][02843] Saving new best policy, reward=23.860!
[2025-02-27 22:49:00,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4068.2). Total num frames: 3141632. Throughput: 0: 1002.2. Samples: 785742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:49:00,492][00667] Avg episode reward: [(0, '25.088')]
[2025-02-27 22:49:00,500][02843] Saving new best policy, reward=25.088!
[2025-02-27 22:49:03,250][02856] Updated weights for policy 0, policy_version 770 (0.0018)
[2025-02-27 22:49:05,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3162112. Throughput: 0: 1008.1. Samples: 789106. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:49:05,492][00667] Avg episode reward: [(0, '23.972')]
[2025-02-27 22:49:10,492][00667] Fps is (10 sec: 4095.5, 60 sec: 3959.4, 300 sec: 4040.4). Total num frames: 3182592. Throughput: 0: 1003.5. Samples: 795812. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:49:10,493][00667] Avg episode reward: [(0, '24.662')]
[2025-02-27 22:49:13,836][02856] Updated weights for policy 0, policy_version 780 (0.0024)
[2025-02-27 22:49:15,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3198976. Throughput: 0: 1001.5. Samples: 800622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:49:15,495][00667] Avg episode reward: [(0, '24.213')]
[2025-02-27 22:49:15,504][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000781_3198976.pth...
[2025-02-27 22:49:15,644][02843] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000545_2232320.pth
[2025-02-27 22:49:20,491][00667] Fps is (10 sec: 4096.3, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3223552. Throughput: 0: 1009.1. Samples: 804128. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:49:20,494][00667] Avg episode reward: [(0, '23.773')]
[2025-02-27 22:49:23,064][02856] Updated weights for policy 0, policy_version 790 (0.0020)
[2025-02-27 22:49:25,492][00667] Fps is (10 sec: 4505.1, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3244032. Throughput: 0: 1009.4. Samples: 811078. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:49:25,493][00667] Avg episode reward: [(0, '23.922')]
[2025-02-27 22:49:30,491][00667] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3260416. Throughput: 0: 1008.0. Samples: 815926. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:49:30,495][00667] Avg episode reward: [(0, '24.966')]
[2025-02-27 22:49:33,539][02856] Updated weights for policy 0, policy_version 800 (0.0017)
[2025-02-27 22:49:35,491][00667] Fps is (10 sec: 4096.4, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 3284992. Throughput: 0: 1012.4. Samples: 819452. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:49:35,492][00667] Avg episode reward: [(0, '23.577')]
[2025-02-27 22:49:40,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4054.4). Total num frames: 3305472. Throughput: 0: 1007.7. Samples: 826250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:49:40,495][00667] Avg episode reward: [(0, '24.236')]
[2025-02-27 22:49:44,430][02856] Updated weights for policy 0, policy_version 810 (0.0021)
[2025-02-27 22:49:45,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3321856. Throughput: 0: 1007.2. Samples: 831064. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-02-27 22:49:45,493][00667] Avg episode reward: [(0, '24.118')]
[2025-02-27 22:49:50,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3342336. Throughput: 0: 1010.3. Samples: 834568. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:49:50,493][00667] Avg episode reward: [(0, '24.325')]
[2025-02-27 22:49:53,241][02856] Updated weights for policy 0, policy_version 820 (0.0037)
[2025-02-27 22:49:55,497][00667] Fps is (10 sec: 4093.7, 60 sec: 3959.1, 300 sec: 4040.4). Total num frames: 3362816. Throughput: 0: 1011.1. Samples: 841318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:49:55,498][00667] Avg episode reward: [(0, '23.911')]
[2025-02-27 22:50:00,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3383296. Throughput: 0: 1017.0. Samples: 846386. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:50:00,493][00667] Avg episode reward: [(0, '23.747')]
[2025-02-27 22:50:03,891][02856] Updated weights for policy 0, policy_version 830 (0.0018)
[2025-02-27 22:50:05,491][00667] Fps is (10 sec: 4098.3, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3403776. Throughput: 0: 1017.7. Samples: 849924. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:50:05,492][00667] Avg episode reward: [(0, '21.443')]
[2025-02-27 22:50:10,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.8, 300 sec: 4040.5). Total num frames: 3424256. Throughput: 0: 1008.0. Samples: 856436. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-02-27 22:50:10,495][00667] Avg episode reward: [(0, '21.617')]
[2025-02-27 22:50:14,493][02856] Updated weights for policy 0, policy_version 840 (0.0012)
[2025-02-27 22:50:15,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 3444736. Throughput: 0: 1017.8. Samples: 861728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:50:15,497][00667] Avg episode reward: [(0, '22.072')]
[2025-02-27 22:50:20,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.8, 300 sec: 4040.5). Total num frames: 3465216. Throughput: 0: 1017.0. Samples: 865216. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:50:20,500][00667] Avg episode reward: [(0, '21.748')]
[2025-02-27 22:50:23,270][02856] Updated weights for policy 0, policy_version 850 (0.0012)
[2025-02-27 22:50:25,493][00667] Fps is (10 sec: 4095.2, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3485696. Throughput: 0: 1009.2. Samples: 871666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:50:25,498][00667] Avg episode reward: [(0, '22.160')]
[2025-02-27 22:50:30,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 3506176. Throughput: 0: 1022.1. Samples: 877060. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:50:30,492][00667] Avg episode reward: [(0, '24.899')]
[2025-02-27 22:50:33,800][02856] Updated weights for policy 0, policy_version 860 (0.0022)
[2025-02-27 22:50:35,491][00667] Fps is (10 sec: 4096.7, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3526656. Throughput: 0: 1022.5. Samples: 880580. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:50:35,501][00667] Avg episode reward: [(0, '25.433')]
[2025-02-27 22:50:35,509][02843] Saving new best policy, reward=25.433!
[2025-02-27 22:50:40,491][00667] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3547136. Throughput: 0: 1007.8. Samples: 886662. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:50:40,495][00667] Avg episode reward: [(0, '26.060')]
[2025-02-27 22:50:40,499][02843] Saving new best policy, reward=26.060!
[2025-02-27 22:50:44,761][02856] Updated weights for policy 0, policy_version 870 (0.0025)
[2025-02-27 22:50:45,491][00667] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3563520. Throughput: 0: 1014.7. Samples: 892048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:50:45,496][00667] Avg episode reward: [(0, '24.284')]
[2025-02-27 22:50:50,491][00667] Fps is (10 sec: 4096.1, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 3588096. Throughput: 0: 1014.5. Samples: 895578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:50:50,496][00667] Avg episode reward: [(0, '24.607')]
[2025-02-27 22:50:54,394][02856] Updated weights for policy 0, policy_version 880 (0.0014)
[2025-02-27 22:50:55,491][00667] Fps is (10 sec: 4096.1, 60 sec: 4028.1, 300 sec: 4040.5). Total num frames: 3604480. Throughput: 0: 1004.4. Samples: 901634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:50:55,495][00667] Avg episode reward: [(0, '24.087')]
[2025-02-27 22:51:00,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3624960. Throughput: 0: 1012.1. Samples: 907272. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:51:00,496][00667] Avg episode reward: [(0, '23.485')]
[2025-02-27 22:51:04,200][02856] Updated weights for policy 0, policy_version 890 (0.0023)
[2025-02-27 22:51:05,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 3649536. Throughput: 0: 1012.3. Samples: 910770. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:51:05,497][00667] Avg episode reward: [(0, '24.489')]
[2025-02-27 22:51:10,492][00667] Fps is (10 sec: 4095.4, 60 sec: 4027.6, 300 sec: 4040.5). Total num frames: 3665920. Throughput: 0: 999.3. Samples: 916634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:51:10,499][00667] Avg episode reward: [(0, '24.282')]
[2025-02-27 22:51:15,096][02856] Updated weights for policy 0, policy_version 900 (0.0020)
[2025-02-27 22:51:15,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3686400. Throughput: 0: 1008.1. Samples: 922426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:51:15,493][00667] Avg episode reward: [(0, '24.725')]
[2025-02-27 22:51:15,501][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000900_3686400.pth...
[2025-02-27 22:51:15,624][02843] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000664_2719744.pth
[2025-02-27 22:51:20,491][00667] Fps is (10 sec: 4096.6, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3706880. Throughput: 0: 1006.0. Samples: 925850. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:51:20,497][00667] Avg episode reward: [(0, '24.962')]
[2025-02-27 22:51:25,495][00667] Fps is (10 sec: 3685.1, 60 sec: 3959.4, 300 sec: 4026.5). Total num frames: 3723264. Throughput: 0: 997.8. Samples: 931566. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:51:25,496][00667] Avg episode reward: [(0, '25.208')]
[2025-02-27 22:51:25,588][02856] Updated weights for policy 0, policy_version 910 (0.0012)
[2025-02-27 22:51:30,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 3747840. Throughput: 0: 1010.0. Samples: 937498. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:51:30,497][00667] Avg episode reward: [(0, '24.078')]
[2025-02-27 22:51:34,785][02856] Updated weights for policy 0, policy_version 920 (0.0015)
[2025-02-27 22:51:35,491][00667] Fps is (10 sec: 4507.2, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3768320. Throughput: 0: 1009.6. Samples: 941008. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:51:35,496][00667] Avg episode reward: [(0, '23.935')]
[2025-02-27 22:51:40,491][00667] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 4026.6). Total num frames: 3784704. Throughput: 0: 997.8. Samples: 946536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:51:40,495][00667] Avg episode reward: [(0, '23.668')]
[2025-02-27 22:51:45,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.8, 300 sec: 4040.5). Total num frames: 3805184. Throughput: 0: 1009.2. Samples: 952688. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-02-27 22:51:45,492][00667] Avg episode reward: [(0, '22.459')]
[2025-02-27 22:51:45,528][02856] Updated weights for policy 0, policy_version 930 (0.0019)
[2025-02-27 22:51:50,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3829760. Throughput: 0: 1009.1. Samples: 956178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:51:50,496][00667] Avg episode reward: [(0, '21.904')]
[2025-02-27 22:51:55,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3846144. Throughput: 0: 998.7. Samples: 961572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-02-27 22:51:55,493][00667] Avg episode reward: [(0, '23.073')]
[2025-02-27 22:51:56,184][02856] Updated weights for policy 0, policy_version 940 (0.0024)
[2025-02-27 22:52:00,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3866624. Throughput: 0: 1012.7. Samples: 967996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:52:00,492][00667] Avg episode reward: [(0, '21.741')]
[2025-02-27 22:52:04,811][02856] Updated weights for policy 0, policy_version 950 (0.0026)
[2025-02-27 22:52:05,491][00667] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3891200. Throughput: 0: 1014.5. Samples: 971504. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:52:05,497][00667] Avg episode reward: [(0, '20.766')]
[2025-02-27 22:52:10,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4027.8, 300 sec: 4040.5). Total num frames: 3907584. Throughput: 0: 1005.9. Samples: 976828. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:52:10,497][00667] Avg episode reward: [(0, '21.111')]
[2025-02-27 22:52:15,491][00667] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4040.5). Total num frames: 3928064. Throughput: 0: 1019.3. Samples: 983366. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:52:15,497][00667] Avg episode reward: [(0, '20.198')]
[2025-02-27 22:52:15,647][02856] Updated weights for policy 0, policy_version 960 (0.0019)
[2025-02-27 22:52:20,492][00667] Fps is (10 sec: 4505.3, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 3952640. Throughput: 0: 1018.1. Samples: 986824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-02-27 22:52:20,493][00667] Avg episode reward: [(0, '20.621')]
[2025-02-27 22:52:25,491][00667] Fps is (10 sec: 4096.0, 60 sec: 4096.2, 300 sec: 4040.5). Total num frames: 3969024. Throughput: 0: 1012.5. Samples: 992098. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:52:25,496][00667] Avg episode reward: [(0, '21.852')]
[2025-02-27 22:52:26,104][02856] Updated weights for policy 0, policy_version 970 (0.0020)
[2025-02-27 22:52:30,491][00667] Fps is (10 sec: 4096.3, 60 sec: 4096.0, 300 sec: 4054.3). Total num frames: 3993600. Throughput: 0: 1025.3. Samples: 998828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-02-27 22:52:30,496][00667] Avg episode reward: [(0, '23.355')]
[2025-02-27 22:52:33,154][02843] Stopping Batcher_0...
[2025-02-27 22:52:33,154][02843] Loop batcher_evt_loop terminating...
[2025-02-27 22:52:33,155][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-02-27 22:52:33,160][00667] Component Batcher_0 stopped!
[2025-02-27 22:52:33,225][02856] Weights refcount: 2 0
[2025-02-27 22:52:33,230][00667] Component InferenceWorker_p0-w0 stopped!
[2025-02-27 22:52:33,239][02856] Stopping InferenceWorker_p0-w0...
[2025-02-27 22:52:33,239][02856] Loop inference_proc0-0_evt_loop terminating...
[2025-02-27 22:52:33,268][02843] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000781_3198976.pth
[2025-02-27 22:52:33,291][02843] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-02-27 22:52:33,462][02843] Stopping LearnerWorker_p0...
[2025-02-27 22:52:33,462][02843] Loop learner_proc0_evt_loop terminating...
[2025-02-27 22:52:33,463][00667] Component LearnerWorker_p0 stopped!
[2025-02-27 22:52:33,553][00667] Component RolloutWorker_w2 stopped!
[2025-02-27 22:52:33,558][02859] Stopping RolloutWorker_w2...
[2025-02-27 22:52:33,561][02859] Loop rollout_proc2_evt_loop terminating...
[2025-02-27 22:52:33,569][00667] Component RolloutWorker_w0 stopped!
[2025-02-27 22:52:33,573][02857] Stopping RolloutWorker_w0...
[2025-02-27 22:52:33,585][00667] Component RolloutWorker_w6 stopped!
[2025-02-27 22:52:33,589][02863] Stopping RolloutWorker_w6...
[2025-02-27 22:52:33,580][02857] Loop rollout_proc0_evt_loop terminating...
[2025-02-27 22:52:33,590][02863] Loop rollout_proc6_evt_loop terminating...
[2025-02-27 22:52:33,639][02862] Stopping RolloutWorker_w7...
[2025-02-27 22:52:33,639][00667] Component RolloutWorker_w7 stopped!
[2025-02-27 22:52:33,642][02862] Loop rollout_proc7_evt_loop terminating...
[2025-02-27 22:52:33,644][00667] Component RolloutWorker_w4 stopped!
[2025-02-27 22:52:33,649][02861] Stopping RolloutWorker_w4...
[2025-02-27 22:52:33,653][02861] Loop rollout_proc4_evt_loop terminating...
[2025-02-27 22:52:33,669][02860] Stopping RolloutWorker_w5...
[2025-02-27 22:52:33,669][00667] Component RolloutWorker_w5 stopped!
[2025-02-27 22:52:33,673][02860] Loop rollout_proc5_evt_loop terminating...
[2025-02-27 22:52:33,715][00667] Component RolloutWorker_w3 stopped!
[2025-02-27 22:52:33,715][02864] Stopping RolloutWorker_w3...
[2025-02-27 22:52:33,718][02864] Loop rollout_proc3_evt_loop terminating...
[2025-02-27 22:52:33,727][00667] Component RolloutWorker_w1 stopped!
[2025-02-27 22:52:33,727][02858] Stopping RolloutWorker_w1...
[2025-02-27 22:52:33,728][00667] Waiting for process learner_proc0 to stop...
[2025-02-27 22:52:33,728][02858] Loop rollout_proc1_evt_loop terminating...
[2025-02-27 22:52:35,398][00667] Waiting for process inference_proc0-0 to join...
[2025-02-27 22:52:35,399][00667] Waiting for process rollout_proc0 to join...
[2025-02-27 22:52:38,784][00667] Waiting for process rollout_proc1 to join...
[2025-02-27 22:52:38,785][00667] Waiting for process rollout_proc2 to join...
[2025-02-27 22:52:38,786][00667] Waiting for process rollout_proc3 to join...
[2025-02-27 22:52:38,787][00667] Waiting for process rollout_proc4 to join...
[2025-02-27 22:52:38,799][00667] Waiting for process rollout_proc5 to join...
[2025-02-27 22:52:38,800][00667] Waiting for process rollout_proc6 to join...
[2025-02-27 22:52:38,802][00667] Waiting for process rollout_proc7 to join...
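The shutdown above is two-phase: each component first stops its own event loop ("Stopping ...", "Loop ... terminating...") and only then does the runner join the worker processes one by one. A bare-bones sketch of that stop-then-join pattern (an assumption; Sample Factory's actual event loops are signal-driven):

```python
import multiprocessing as mp

def join_all(procs: list[mp.Process], timeout: float = 10.0) -> None:
    # "Waiting for process ... to join..." -- one join per worker, with a
    # terminate() fallback only if a process fails to exit cleanly.
    for p in procs:
        p.join(timeout)
        if p.is_alive():
            p.terminate()
            p.join()
```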
[2025-02-27 22:52:38,803][00667] Batcher 0 profile tree view:
batching: 26.8878, releasing_batches: 0.0250
[2025-02-27 22:52:38,803][00667] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
wait_policy_total: 387.4915
update_model: 8.1358
weight_update: 0.0020
one_step: 0.0025
handle_policy_step: 575.5230
deserialize: 13.4416, stack: 3.0129, obs_to_device_normalize: 120.9449, forward: 296.1265, send_messages: 27.8692
prepare_outputs: 89.5694
to_cpu: 56.1327
[2025-02-27 22:52:38,804][00667] Learner 0 profile tree view:
misc: 0.0039, prepare_batch: 13.1377
train: 74.3134
epoch_init: 0.0115, minibatch_init: 0.0057, losses_postprocess: 0.5702, kl_divergence: 0.6529, after_optimizer: 32.8442
calculate_losses: 28.1535
losses_init: 0.0035, forward_head: 1.3747, bptt_initial: 19.3942, tail: 1.2123, advantages_returns: 0.2800, losses: 3.5434
bptt: 2.0807
bptt_forward_core: 2.0029
update: 11.4262
clip: 0.8486
[2025-02-27 22:52:38,806][00667] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.2275, enqueue_policy_requests: 93.5256, env_step: 797.3914, overhead: 11.8703, complete_rollouts: 6.9623
save_policy_outputs: 17.5343
split_output_tensors: 6.9380
[2025-02-27 22:52:38,807][00667] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.2758, enqueue_policy_requests: 94.4212, env_step: 796.8897, overhead: 11.7316, complete_rollouts: 6.8725
save_policy_outputs: 16.9932
split_output_tensors: 6.7407
[2025-02-27 22:52:38,808][00667] Loop Runner_EvtLoop terminating...
[2025-02-27 22:52:38,809][00667] Runner profile tree view:
main_loop: 1036.8361
[2025-02-27 22:52:38,810][00667] Collected {0: 4005888}, FPS: 3863.6
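That final figure is simply the total collected frames divided by the runner's main-loop wall time, both of which appear in the profile output above:

```python
# Both numbers come straight from the log above.
total_frames = 4_005_888        # Collected {0: 4005888}
main_loop_s = 1036.8361         # Runner profile tree: main_loop
print(f"{total_frames / main_loop_s:.1f}")  # -> 3863.6
```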
[2025-02-27 22:52:39,240][00667] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-02-27 22:52:39,241][00667] Overriding arg 'num_workers' with value 1 passed from command line
[2025-02-27 22:52:39,242][00667] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-02-27 22:52:39,243][00667] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-02-27 22:52:39,244][00667] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-02-27 22:52:39,245][00667] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-02-27 22:52:39,246][00667] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2025-02-27 22:52:39,247][00667] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-02-27 22:52:39,248][00667] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2025-02-27 22:52:39,249][00667] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2025-02-27 22:52:39,250][00667] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-02-27 22:52:39,251][00667] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-02-27 22:52:39,252][00667] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-02-27 22:52:39,253][00667] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-02-27 22:52:39,254][00667] Using frameskip 1 and render_action_repeat=4 for evaluation
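The block above reloads the saved training config, layers the evaluation-only arguments on top, and then rolls the policy out headlessly. In the Deep RL course this whole sequence is driven by a single enjoy() call; a sketch under the assumption that the helper and module paths match the course notebook (they can differ between Sample Factory versions):

```python
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.enjoy import enjoy
# Doom-specific arg registration; paths as in the course notebook (assumed):
from sf_examples.vizdoom.doom.doom_params import (
    add_doom_env_args, doom_override_defaults)

def parse_vizdoom_cfg(argv=None, evaluation=False):
    parser, _ = parse_sf_args(argv=argv, evaluation=evaluation)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    return parse_full_cfg(parser, argv)

cfg = parse_vizdoom_cfg(
    argv=["--env=doom_health_gathering_supreme", "--num_workers=1",
          "--no_render", "--save_video", "--max_num_episodes=10"],
    evaluation=True)
status = enjoy(cfg)  # emits the "Num frames ..." lines that follow
```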
[2025-02-27 22:52:39,287][00667] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-27 22:52:39,290][00667] RunningMeanStd input shape: (3, 72, 128)
[2025-02-27 22:52:39,293][00667] RunningMeanStd input shape: (1,)
[2025-02-27 22:52:39,306][00667] ConvEncoder: input_channels=3
[2025-02-27 22:52:39,404][00667] Conv encoder output size: 512
[2025-02-27 22:52:39,404][00667] Policy head output size: 512
[2025-02-27 22:52:39,588][00667] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-02-27 22:52:40,342][00667] Num frames 100...
[2025-02-27 22:52:40,473][00667] Num frames 200...
[2025-02-27 22:52:40,604][00667] Num frames 300...
[2025-02-27 22:52:40,739][00667] Num frames 400...
[2025-02-27 22:52:40,869][00667] Num frames 500...
[2025-02-27 22:52:41,001][00667] Num frames 600...
[2025-02-27 22:52:41,138][00667] Num frames 700...
[2025-02-27 22:52:41,270][00667] Num frames 800...
[2025-02-27 22:52:41,400][00667] Num frames 900...
[2025-02-27 22:52:41,536][00667] Num frames 1000...
[2025-02-27 22:52:41,668][00667] Num frames 1100...
[2025-02-27 22:52:41,797][00667] Num frames 1200...
[2025-02-27 22:52:41,874][00667] Avg episode rewards: #0: 27.160, true rewards: #0: 12.160
[2025-02-27 22:52:41,874][00667] Avg episode reward: 27.160, avg true_objective: 12.160
[2025-02-27 22:52:41,995][00667] Num frames 1300...
[2025-02-27 22:52:42,133][00667] Num frames 1400...
[2025-02-27 22:52:42,262][00667] Num frames 1500...
[2025-02-27 22:52:42,391][00667] Num frames 1600...
[2025-02-27 22:52:42,527][00667] Num frames 1700...
[2025-02-27 22:52:42,657][00667] Num frames 1800...
[2025-02-27 22:52:42,784][00667] Avg episode rewards: #0: 18.780, true rewards: #0: 9.280
[2025-02-27 22:52:42,786][00667] Avg episode reward: 18.780, avg true_objective: 9.280
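Note that each "Avg episode rewards" line is a running mean over all episodes completed so far, not the score of the episode that just ended; individual episode scores can be recovered by differencing consecutive lines:

```python
# Values taken from the two reward lines above.
avg1, avg2 = 27.160, 18.780      # running means after episodes 1 and 2
ep2_reward = 2 * avg2 - avg1     # episode 2 alone: 10.40
true1, true2 = 12.160, 9.280
ep2_true = 2 * true2 - true1     # episode 2 true objective: 6.40
```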
[2025-02-27 22:52:42,847][00667] Num frames 1900...
[2025-02-27 22:52:42,976][00667] Num frames 2000...
[2025-02-27 22:52:43,106][00667] Num frames 2100...
[2025-02-27 22:52:43,239][00667] Num frames 2200...
[2025-02-27 22:52:43,367][00667] Num frames 2300...
[2025-02-27 22:52:43,501][00667] Num frames 2400...
[2025-02-27 22:52:43,630][00667] Num frames 2500...
[2025-02-27 22:52:43,762][00667] Num frames 2600...
[2025-02-27 22:52:43,891][00667] Num frames 2700...
[2025-02-27 22:52:43,972][00667] Avg episode rewards: #0: 19.067, true rewards: #0: 9.067
[2025-02-27 22:52:43,973][00667] Avg episode reward: 19.067, avg true_objective: 9.067
[2025-02-27 22:52:44,079][00667] Num frames 2800...
[2025-02-27 22:52:44,215][00667] Num frames 2900...
[2025-02-27 22:52:44,344][00667] Num frames 3000...
[2025-02-27 22:52:44,477][00667] Num frames 3100...
[2025-02-27 22:52:44,612][00667] Num frames 3200...
[2025-02-27 22:52:44,750][00667] Num frames 3300...
[2025-02-27 22:52:44,878][00667] Num frames 3400...
[2025-02-27 22:52:45,012][00667] Num frames 3500...
[2025-02-27 22:52:45,143][00667] Num frames 3600...
[2025-02-27 22:52:45,290][00667] Num frames 3700...
[2025-02-27 22:52:45,417][00667] Num frames 3800...
[2025-02-27 22:52:45,557][00667] Num frames 3900...
[2025-02-27 22:52:45,686][00667] Num frames 4000...
[2025-02-27 22:52:45,816][00667] Num frames 4100...
[2025-02-27 22:52:45,949][00667] Num frames 4200...
[2025-02-27 22:52:46,085][00667] Num frames 4300...
[2025-02-27 22:52:46,224][00667] Num frames 4400...
[2025-02-27 22:52:46,365][00667] Num frames 4500...
[2025-02-27 22:52:46,511][00667] Num frames 4600...
[2025-02-27 22:52:46,577][00667] Avg episode rewards: #0: 24.520, true rewards: #0: 11.520
[2025-02-27 22:52:46,578][00667] Avg episode reward: 24.520, avg true_objective: 11.520
[2025-02-27 22:52:46,700][00667] Num frames 4700...
[2025-02-27 22:52:46,834][00667] Num frames 4800...
[2025-02-27 22:52:46,967][00667] Num frames 4900...
[2025-02-27 22:52:47,104][00667] Num frames 5000...
[2025-02-27 22:52:47,238][00667] Num frames 5100...
[2025-02-27 22:52:47,376][00667] Num frames 5200...
[2025-02-27 22:52:47,510][00667] Num frames 5300...
[2025-02-27 22:52:47,646][00667] Num frames 5400...
[2025-02-27 22:52:47,776][00667] Num frames 5500...
[2025-02-27 22:52:47,907][00667] Num frames 5600...
[2025-02-27 22:52:48,054][00667] Avg episode rewards: #0: 24.338, true rewards: #0: 11.338
[2025-02-27 22:52:48,055][00667] Avg episode reward: 24.338, avg true_objective: 11.338
[2025-02-27 22:52:48,103][00667] Num frames 5700...
[2025-02-27 22:52:48,250][00667] Num frames 5800...
[2025-02-27 22:52:48,393][00667] Num frames 5900...
[2025-02-27 22:52:48,531][00667] Num frames 6000...
[2025-02-27 22:52:48,677][00667] Num frames 6100...
[2025-02-27 22:52:48,849][00667] Num frames 6200...
[2025-02-27 22:52:49,027][00667] Num frames 6300...
[2025-02-27 22:52:49,157][00667] Avg episode rewards: #0: 22.235, true rewards: #0: 10.568
[2025-02-27 22:52:49,160][00667] Avg episode reward: 22.235, avg true_objective: 10.568
[2025-02-27 22:52:49,274][00667] Num frames 6400...
[2025-02-27 22:52:49,455][00667] Num frames 6500...
[2025-02-27 22:52:49,626][00667] Num frames 6600...
[2025-02-27 22:52:49,795][00667] Num frames 6700...
[2025-02-27 22:52:49,967][00667] Num frames 6800...
[2025-02-27 22:52:50,155][00667] Num frames 6900...
[2025-02-27 22:52:50,329][00667] Num frames 7000...
[2025-02-27 22:52:50,527][00667] Num frames 7100...
[2025-02-27 22:52:50,710][00667] Num frames 7200...
[2025-02-27 22:52:50,859][00667] Num frames 7300...
[2025-02-27 22:52:50,916][00667] Avg episode rewards: #0: 21.859, true rewards: #0: 10.430
[2025-02-27 22:52:50,917][00667] Avg episode reward: 21.859, avg true_objective: 10.430
[2025-02-27 22:52:51,044][00667] Num frames 7400...
[2025-02-27 22:52:51,172][00667] Num frames 7500...
[2025-02-27 22:52:51,303][00667] Num frames 7600...
[2025-02-27 22:52:51,436][00667] Num frames 7700...
[2025-02-27 22:52:51,569][00667] Num frames 7800...
[2025-02-27 22:52:51,681][00667] Avg episode rewards: #0: 20.056, true rewards: #0: 9.806
[2025-02-27 22:52:51,681][00667] Avg episode reward: 20.056, avg true_objective: 9.806
[2025-02-27 22:52:51,752][00667] Num frames 7900...
[2025-02-27 22:52:51,880][00667] Num frames 8000...
[2025-02-27 22:52:52,007][00667] Num frames 8100...
[2025-02-27 22:52:52,134][00667] Num frames 8200...
[2025-02-27 22:52:52,260][00667] Num frames 8300...
[2025-02-27 22:52:52,389][00667] Num frames 8400...
[2025-02-27 22:52:52,476][00667] Avg episode rewards: #0: 19.135, true rewards: #0: 9.357
[2025-02-27 22:52:52,477][00667] Avg episode reward: 19.135, avg true_objective: 9.357
[2025-02-27 22:52:52,577][00667] Num frames 8500...
[2025-02-27 22:52:52,705][00667] Num frames 8600...
[2025-02-27 22:52:52,831][00667] Num frames 8700...
[2025-02-27 22:52:52,958][00667] Num frames 8800...
[2025-02-27 22:52:53,089][00667] Num frames 8900...
[2025-02-27 22:52:53,217][00667] Num frames 9000...
[2025-02-27 22:52:53,346][00667] Num frames 9100...
[2025-02-27 22:52:53,490][00667] Num frames 9200...
[2025-02-27 22:52:53,631][00667] Num frames 9300...
[2025-02-27 22:52:53,763][00667] Num frames 9400...
[2025-02-27 22:52:53,890][00667] Num frames 9500...
[2025-02-27 22:52:54,021][00667] Num frames 9600...
[2025-02-27 22:52:54,152][00667] Num frames 9700...
[2025-02-27 22:52:54,287][00667] Num frames 9800...
[2025-02-27 22:52:54,422][00667] Num frames 9900...
[2025-02-27 22:52:54,568][00667] Avg episode rewards: #0: 20.954, true rewards: #0: 9.954
[2025-02-27 22:52:54,569][00667] Avg episode reward: 20.954, avg true_objective: 9.954
[2025-02-27 22:53:54,917][00667] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
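The replay is assembled from the RGB frames rendered during the ten evaluation episodes. Sample Factory writes the file itself when save_video is set; purely as an illustration of the final encoding step, with hypothetical inputs:

```python
import imageio.v2 as imageio

frames = []  # HxWx3 uint8 arrays collected during the evaluation episodes
# fps=35 is Doom's nominal tick rate -- an assumption for this sketch only.
imageio.mimwrite("/content/train_dir/default_experiment/replay.mp4",
                 frames, fps=35)
```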
[2025-02-27 22:53:55,420][00667] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-02-27 22:53:55,421][00667] Overriding arg 'num_workers' with value 1 passed from command line
[2025-02-27 22:53:55,422][00667] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-02-27 22:53:55,423][00667] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-02-27 22:53:55,423][00667] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-02-27 22:53:55,424][00667] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-02-27 22:53:55,425][00667] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2025-02-27 22:53:55,425][00667] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-02-27 22:53:55,426][00667] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2025-02-27 22:53:55,427][00667] Adding new argument 'hf_repository'='amostof/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2025-02-27 22:53:55,427][00667] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-02-27 22:53:55,428][00667] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-02-27 22:53:55,429][00667] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-02-27 22:53:55,429][00667] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-02-27 22:53:55,430][00667] Using frameskip 1 and render_action_repeat=4 for evaluation
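This second pass repeats the evaluation with push_to_hub=True, so once the replay is written the experiment folder is uploaded to the named repository. A sketch of the upload step using huggingface_hub directly (Sample Factory also ships its own Hugging Face helper for this):

```python
from huggingface_hub import HfApi

HfApi().upload_folder(
    folder_path="/content/train_dir/default_experiment",
    repo_id="amostof/rl_course_vizdoom_health_gathering_supreme",
    repo_type="model",
)
```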
[2025-02-27 22:53:55,465][00667] RunningMeanStd input shape: (3, 72, 128)
[2025-02-27 22:53:55,466][00667] RunningMeanStd input shape: (1,)
[2025-02-27 22:53:55,488][00667] ConvEncoder: input_channels=3
[2025-02-27 22:53:55,554][00667] Conv encoder output size: 512
[2025-02-27 22:53:55,555][00667] Policy head output size: 512
[2025-02-27 22:53:55,581][00667] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-02-27 22:53:56,256][00667] Num frames 100...
[2025-02-27 22:53:56,416][00667] Num frames 200...
[2025-02-27 22:53:56,620][00667] Num frames 300...
[2025-02-27 22:53:56,795][00667] Num frames 400...
[2025-02-27 22:53:56,964][00667] Num frames 500...
[2025-02-27 22:53:57,129][00667] Avg episode rewards: #0: 10.700, true rewards: #0: 5.700
[2025-02-27 22:53:57,130][00667] Avg episode reward: 10.700, avg true_objective: 5.700
[2025-02-27 22:53:57,179][00667] Num frames 600...
[2025-02-27 22:53:57,337][00667] Num frames 700...
[2025-02-27 22:53:57,493][00667] Num frames 800...
[2025-02-27 22:53:57,654][00667] Num frames 900...
[2025-02-27 22:53:57,830][00667] Num frames 1000...
[2025-02-27 22:53:58,002][00667] Num frames 1100...
[2025-02-27 22:53:58,167][00667] Num frames 1200...
[2025-02-27 22:53:58,368][00667] Avg episode rewards: #0: 10.870, true rewards: #0: 6.370
[2025-02-27 22:53:58,369][00667] Avg episode reward: 10.870, avg true_objective: 6.370
[2025-02-27 22:53:58,402][00667] Num frames 1300...
[2025-02-27 22:53:58,543][00667] Num frames 1400...
[2025-02-27 22:53:58,670][00667] Num frames 1500...
[2025-02-27 22:53:58,795][00667] Num frames 1600...
[2025-02-27 22:53:58,926][00667] Num frames 1700...
[2025-02-27 22:53:59,086][00667] Avg episode rewards: #0: 9.620, true rewards: #0: 5.953
[2025-02-27 22:53:59,087][00667] Avg episode reward: 9.620, avg true_objective: 5.953
[2025-02-27 22:53:59,106][00667] Num frames 1800...
[2025-02-27 22:53:59,230][00667] Num frames 1900...
[2025-02-27 22:53:59,353][00667] Num frames 2000...
[2025-02-27 22:53:59,483][00667] Num frames 2100...
[2025-02-27 22:53:59,613][00667] Num frames 2200...
[2025-02-27 22:53:59,740][00667] Num frames 2300...
[2025-02-27 22:53:59,870][00667] Num frames 2400...
[2025-02-27 22:54:00,005][00667] Num frames 2500...
[2025-02-27 22:54:00,120][00667] Avg episode rewards: #0: 11.115, true rewards: #0: 6.365
[2025-02-27 22:54:00,121][00667] Avg episode reward: 11.115, avg true_objective: 6.365
[2025-02-27 22:54:00,190][00667] Num frames 2600...
[2025-02-27 22:54:00,320][00667] Num frames 2700...
[2025-02-27 22:54:00,445][00667] Num frames 2800...
[2025-02-27 22:54:00,576][00667] Num frames 2900...
[2025-02-27 22:54:00,704][00667] Num frames 3000...
[2025-02-27 22:54:00,833][00667] Num frames 3100...
[2025-02-27 22:54:00,968][00667] Num frames 3200...
[2025-02-27 22:54:01,101][00667] Num frames 3300...
[2025-02-27 22:54:01,228][00667] Num frames 3400...
[2025-02-27 22:54:01,358][00667] Num frames 3500...
[2025-02-27 22:54:01,484][00667] Num frames 3600...
[2025-02-27 22:54:01,613][00667] Num frames 3700...
[2025-02-27 22:54:01,745][00667] Num frames 3800...
[2025-02-27 22:54:01,873][00667] Num frames 3900...
[2025-02-27 22:54:02,012][00667] Num frames 4000...
[2025-02-27 22:54:02,139][00667] Num frames 4100...
[2025-02-27 22:54:02,281][00667] Num frames 4200...
[2025-02-27 22:54:02,415][00667] Num frames 4300...
[2025-02-27 22:54:02,564][00667] Avg episode rewards: #0: 18.140, true rewards: #0: 8.740
[2025-02-27 22:54:02,565][00667] Avg episode reward: 18.140, avg true_objective: 8.740
[2025-02-27 22:54:02,605][00667] Num frames 4400...
[2025-02-27 22:54:02,733][00667] Num frames 4500...
[2025-02-27 22:54:02,859][00667] Num frames 4600...
[2025-02-27 22:54:02,996][00667] Num frames 4700...
[2025-02-27 22:54:03,078][00667] Avg episode rewards: #0: 15.870, true rewards: #0: 7.870
[2025-02-27 22:54:03,079][00667] Avg episode reward: 15.870, avg true_objective: 7.870
[2025-02-27 22:54:03,177][00667] Num frames 4800...
[2025-02-27 22:54:03,304][00667] Num frames 4900...
[2025-02-27 22:54:03,433][00667] Num frames 5000...
[2025-02-27 22:54:03,565][00667] Num frames 5100...
[2025-02-27 22:54:03,690][00667] Num frames 5200...
[2025-02-27 22:54:03,821][00667] Num frames 5300...
[2025-02-27 22:54:03,914][00667] Avg episode rewards: #0: 15.471, true rewards: #0: 7.614
[2025-02-27 22:54:03,915][00667] Avg episode reward: 15.471, avg true_objective: 7.614
[2025-02-27 22:54:04,015][00667] Num frames 5400...
[2025-02-27 22:54:04,142][00667] Num frames 5500...
[2025-02-27 22:54:04,269][00667] Num frames 5600...
[2025-02-27 22:54:04,399][00667] Num frames 5700...
[2025-02-27 22:54:04,529][00667] Num frames 5800...
[2025-02-27 22:54:04,660][00667] Num frames 5900...
[2025-02-27 22:54:04,842][00667] Num frames 6000...
[2025-02-27 22:54:05,021][00667] Num frames 6100...
[2025-02-27 22:54:05,190][00667] Num frames 6200...
[2025-02-27 22:54:05,360][00667] Num frames 6300...
[2025-02-27 22:54:05,533][00667] Num frames 6400...
[2025-02-27 22:54:05,704][00667] Num frames 6500...
[2025-02-27 22:54:05,877][00667] Num frames 6600...
[2025-02-27 22:54:06,086][00667] Num frames 6700...
[2025-02-27 22:54:06,265][00667] Num frames 6800...
[2025-02-27 22:54:06,436][00667] Num frames 6900...
[2025-02-27 22:54:06,622][00667] Num frames 7000...
[2025-02-27 22:54:06,809][00667] Num frames 7100...
[2025-02-27 22:54:06,902][00667] Avg episode rewards: #0: 19.902, true rewards: #0: 8.902
[2025-02-27 22:54:06,903][00667] Avg episode reward: 19.902, avg true_objective: 8.902
[2025-02-27 22:54:07,004][00667] Num frames 7200...
[2025-02-27 22:54:07,141][00667] Num frames 7300...
[2025-02-27 22:54:07,269][00667] Num frames 7400...
[2025-02-27 22:54:07,397][00667] Num frames 7500...
[2025-02-27 22:54:07,528][00667] Num frames 7600...
[2025-02-27 22:54:07,658][00667] Num frames 7700...
[2025-02-27 22:54:07,788][00667] Num frames 7800...
[2025-02-27 22:54:07,913][00667] Num frames 7900...
[2025-02-27 22:54:07,998][00667] Avg episode rewards: #0: 19.247, true rewards: #0: 8.802
[2025-02-27 22:54:07,999][00667] Avg episode reward: 19.247, avg true_objective: 8.802
[2025-02-27 22:54:08,098][00667] Num frames 8000...
[2025-02-27 22:54:08,230][00667] Num frames 8100...
[2025-02-27 22:54:08,361][00667] Num frames 8200...
[2025-02-27 22:54:08,492][00667] Num frames 8300...
[2025-02-27 22:54:08,619][00667] Num frames 8400...
[2025-02-27 22:54:08,746][00667] Num frames 8500...
[2025-02-27 22:54:08,876][00667] Num frames 8600...
[2025-02-27 22:54:09,048][00667] Avg episode rewards: #0: 18.890, true rewards: #0: 8.690
[2025-02-27 22:54:09,049][00667] Avg episode reward: 18.890, avg true_objective: 8.690
[2025-02-27 22:55:01,465][00667] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2025-02-27 23:09:38,530][00667] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-02-27 23:09:38,532][00667] Overriding arg 'num_workers' with value 1 passed from command line
[2025-02-27 23:09:38,532][00667] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-02-27 23:09:38,533][00667] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-02-27 23:09:38,534][00667] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-02-27 23:09:38,535][00667] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-02-27 23:09:38,536][00667] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2025-02-27 23:09:38,541][00667] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-02-27 23:09:38,542][00667] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2025-02-27 23:09:38,543][00667] Adding new argument 'hf_repository'='amostof/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2025-02-27 23:09:38,544][00667] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-02-27 23:09:38,546][00667] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-02-27 23:09:38,547][00667] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-02-27 23:09:38,548][00667] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-02-27 23:09:38,548][00667] Using frameskip 1 and render_action_repeat=4 for evaluation
[2025-02-27 23:09:38,596][00667] RunningMeanStd input shape: (3, 72, 128)
[2025-02-27 23:09:38,599][00667] RunningMeanStd input shape: (1,)
[2025-02-27 23:09:38,616][00667] ConvEncoder: input_channels=3
[2025-02-27 23:09:38,672][00667] Conv encoder output size: 512
[2025-02-27 23:09:38,673][00667] Policy head output size: 512
[2025-02-27 23:09:38,701][00667] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-02-27 23:09:39,382][00667] Num frames 100...
[2025-02-27 23:09:39,559][00667] Num frames 200...
[2025-02-27 23:09:39,726][00667] Num frames 300...
[2025-02-27 23:09:39,904][00667] Num frames 400...
[2025-02-27 23:09:40,092][00667] Num frames 500...
[2025-02-27 23:09:40,279][00667] Num frames 600...
[2025-02-27 23:09:40,469][00667] Num frames 700...
[2025-02-27 23:09:40,633][00667] Num frames 800...
[2025-02-27 23:09:40,767][00667] Num frames 900...
[2025-02-27 23:09:40,897][00667] Num frames 1000...
[2025-02-27 23:09:41,033][00667] Num frames 1100...
[2025-02-27 23:09:41,115][00667] Avg episode rewards: #0: 27.200, true rewards: #0: 11.200
[2025-02-27 23:09:41,116][00667] Avg episode reward: 27.200, avg true_objective: 11.200
[2025-02-27 23:09:41,229][00667] Num frames 1200...
[2025-02-27 23:09:41,369][00667] Num frames 1300...
[2025-02-27 23:09:41,506][00667] Num frames 1400...
[2025-02-27 23:09:41,641][00667] Num frames 1500...
[2025-02-27 23:09:41,770][00667] Num frames 1600...
[2025-02-27 23:09:41,902][00667] Num frames 1700...
[2025-02-27 23:09:42,034][00667] Num frames 1800...
[2025-02-27 23:09:42,163][00667] Num frames 1900...
[2025-02-27 23:09:42,292][00667] Num frames 2000...
[2025-02-27 23:09:42,429][00667] Num frames 2100...
[2025-02-27 23:09:42,545][00667] Avg episode rewards: #0: 23.720, true rewards: #0: 10.720
[2025-02-27 23:09:42,546][00667] Avg episode reward: 23.720, avg true_objective: 10.720
[2025-02-27 23:09:42,620][00667] Num frames 2200...
[2025-02-27 23:09:42,751][00667] Num frames 2300...
[2025-02-27 23:09:42,884][00667] Num frames 2400...
[2025-02-27 23:09:43,026][00667] Num frames 2500...
[2025-02-27 23:09:43,160][00667] Num frames 2600...
[2025-02-27 23:09:43,287][00667] Num frames 2700...
[2025-02-27 23:09:43,457][00667] Avg episode rewards: #0: 20.277, true rewards: #0: 9.277
[2025-02-27 23:09:43,458][00667] Avg episode reward: 20.277, avg true_objective: 9.277
[2025-02-27 23:09:43,486][00667] Num frames 2800...
[2025-02-27 23:09:43,619][00667] Num frames 2900...
[2025-02-27 23:09:43,749][00667] Num frames 3000...
[2025-02-27 23:09:43,874][00667] Num frames 3100...
[2025-02-27 23:09:44,012][00667] Num frames 3200...
[2025-02-27 23:09:44,142][00667] Num frames 3300...
[2025-02-27 23:09:44,269][00667] Num frames 3400...
[2025-02-27 23:09:44,405][00667] Num frames 3500...
[2025-02-27 23:09:44,527][00667] Avg episode rewards: #0: 18.878, true rewards: #0: 8.877
[2025-02-27 23:09:44,528][00667] Avg episode reward: 18.878, avg true_objective: 8.877
[2025-02-27 23:09:44,596][00667] Num frames 3600...
[2025-02-27 23:09:44,726][00667] Num frames 3700...
[2025-02-27 23:09:44,854][00667] Num frames 3800...
[2025-02-27 23:09:44,982][00667] Num frames 3900...
[2025-02-27 23:09:45,112][00667] Num frames 4000...
[2025-02-27 23:09:45,244][00667] Num frames 4100...
[2025-02-27 23:09:45,373][00667] Num frames 4200...
[2025-02-27 23:09:45,512][00667] Num frames 4300...
[2025-02-27 23:09:45,680][00667] Avg episode rewards: #0: 18.570, true rewards: #0: 8.770
[2025-02-27 23:09:45,682][00667] Avg episode reward: 18.570, avg true_objective: 8.770
[2025-02-27 23:09:45,704][00667] Num frames 4400...
[2025-02-27 23:09:45,841][00667] Num frames 4500...
[2025-02-27 23:09:45,977][00667] Num frames 4600...
[2025-02-27 23:09:46,104][00667] Num frames 4700...
[2025-02-27 23:09:46,230][00667] Num frames 4800...
[2025-02-27 23:09:46,361][00667] Num frames 4900...
[2025-02-27 23:09:46,498][00667] Num frames 5000...
[2025-02-27 23:09:46,628][00667] Num frames 5100...
[2025-02-27 23:09:46,759][00667] Num frames 5200...
[2025-02-27 23:09:46,886][00667] Num frames 5300...
[2025-02-27 23:09:47,015][00667] Num frames 5400...
[2025-02-27 23:09:47,147][00667] Num frames 5500...
[2025-02-27 23:09:47,277][00667] Num frames 5600...
[2025-02-27 23:09:47,408][00667] Num frames 5700...
[2025-02-27 23:09:47,549][00667] Num frames 5800...
[2025-02-27 23:09:47,681][00667] Num frames 5900...
[2025-02-27 23:09:47,810][00667] Num frames 6000...
[2025-02-27 23:09:47,947][00667] Num frames 6100...
[2025-02-27 23:09:48,078][00667] Num frames 6200...
[2025-02-27 23:09:48,260][00667] Avg episode rewards: #0: 24.663, true rewards: #0: 10.497
[2025-02-27 23:09:48,261][00667] Avg episode reward: 24.663, avg true_objective: 10.497
[2025-02-27 23:09:48,266][00667] Num frames 6300...
[2025-02-27 23:09:48,396][00667] Num frames 6400...
[2025-02-27 23:09:48,540][00667] Num frames 6500...
[2025-02-27 23:09:48,671][00667] Num frames 6600...
[2025-02-27 23:09:48,803][00667] Num frames 6700...
[2025-02-27 23:09:48,937][00667] Num frames 6800...
[2025-02-27 23:09:49,069][00667] Num frames 6900...
[2025-02-27 23:09:49,194][00667] Num frames 7000...
[2025-02-27 23:09:49,325][00667] Num frames 7100...
[2025-02-27 23:09:49,456][00667] Num frames 7200...
[2025-02-27 23:09:49,594][00667] Num frames 7300...
[2025-02-27 23:09:49,726][00667] Num frames 7400...
[2025-02-27 23:09:49,854][00667] Num frames 7500...
[2025-02-27 23:09:49,986][00667] Num frames 7600...
[2025-02-27 23:09:50,118][00667] Num frames 7700...
[2025-02-27 23:09:50,246][00667] Num frames 7800...
[2025-02-27 23:09:50,378][00667] Num frames 7900...
[2025-02-27 23:09:50,473][00667] Avg episode rewards: #0: 26.757, true rewards: #0: 11.329
[2025-02-27 23:09:50,474][00667] Avg episode reward: 26.757, avg true_objective: 11.329
[2025-02-27 23:09:50,577][00667] Num frames 8000...
[2025-02-27 23:09:50,758][00667] Num frames 8100...
[2025-02-27 23:09:50,930][00667] Num frames 8200...
[2025-02-27 23:09:51,100][00667] Num frames 8300...
[2025-02-27 23:09:51,285][00667] Avg episode rewards: #0: 24.347, true rewards: #0: 10.472
[2025-02-27 23:09:51,288][00667] Avg episode reward: 24.347, avg true_objective: 10.472
[2025-02-27 23:09:51,328][00667] Num frames 8400...
[2025-02-27 23:09:51,501][00667] Num frames 8500...
[2025-02-27 23:09:51,678][00667] Num frames 8600...
[2025-02-27 23:09:51,854][00667] Num frames 8700...
[2025-02-27 23:09:52,040][00667] Num frames 8800...
[2025-02-27 23:09:52,218][00667] Num frames 8900...
[2025-02-27 23:09:52,402][00667] Num frames 9000...
[2025-02-27 23:09:52,594][00667] Num frames 9100...
[2025-02-27 23:09:52,766][00667] Num frames 9200...
[2025-02-27 23:09:52,893][00667] Num frames 9300...
[2025-02-27 23:09:53,040][00667] Avg episode rewards: #0: 23.967, true rewards: #0: 10.411
[2025-02-27 23:09:53,042][00667] Avg episode reward: 23.967, avg true_objective: 10.411
[2025-02-27 23:09:53,083][00667] Num frames 9400...
[2025-02-27 23:09:53,212][00667] Num frames 9500...
[2025-02-27 23:09:53,342][00667] Num frames 9600...
[2025-02-27 23:09:53,470][00667] Num frames 9700...
[2025-02-27 23:09:53,604][00667] Num frames 9800...
[2025-02-27 23:09:53,740][00667] Num frames 9900...
[2025-02-27 23:09:53,867][00667] Num frames 10000...
[2025-02-27 23:09:53,997][00667] Num frames 10100...
[2025-02-27 23:09:54,125][00667] Num frames 10200...
[2025-02-27 23:09:54,253][00667] Num frames 10300...
[2025-02-27 23:09:54,379][00667] Num frames 10400...
[2025-02-27 23:09:54,511][00667] Num frames 10500...
[2025-02-27 23:09:54,638][00667] Num frames 10600...
[2025-02-27 23:09:54,772][00667] Num frames 10700...
[2025-02-27 23:09:54,903][00667] Num frames 10800...
[2025-02-27 23:09:55,033][00667] Num frames 10900...
[2025-02-27 23:09:55,185][00667] Avg episode rewards: #0: 24.774, true rewards: #0: 10.974
[2025-02-27 23:09:55,186][00667] Avg episode reward: 24.774, avg true_objective: 10.974
[2025-02-27 23:11:03,009][00667] Replay video saved to /content/train_dir/default_experiment/replay.mp4!