|
[2025-03-10 14:58:47,504][00323] Saving configuration to /content/train_dir/default_experiment/config.json... |
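The run begins by persisting the experiment configuration. A minimal sketch for inspecting that file afterwards, assuming only that it is ordinary JSON (as the extension suggests):

import json

# Path taken verbatim from the log line above.
with open("/content/train_dir/default_experiment/config.json") as f:
    config = json.load(f)

# Peek at a few settings to confirm what this run was trained with.
for key in sorted(config)[:10]:
    print(key, "=", config[key])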
|
[2025-03-10 14:58:47,506][00323] Rollout worker 0 uses device cpu |
|
[2025-03-10 14:58:47,507][00323] Rollout worker 1 uses device cpu |
|
[2025-03-10 14:58:47,508][00323] Rollout worker 2 uses device cpu |
|
[2025-03-10 14:58:47,509][00323] Rollout worker 3 uses device cpu |
|
[2025-03-10 14:58:47,509][00323] Rollout worker 4 uses device cpu |
|
[2025-03-10 14:58:47,510][00323] Rollout worker 5 uses device cpu |
|
[2025-03-10 14:58:47,511][00323] Rollout worker 6 uses device cpu |
|
[2025-03-10 14:58:47,512][00323] Rollout worker 7 uses device cpu |
|
[2025-03-10 14:58:47,678][00323] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-10 14:58:47,679][00323] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-03-10 14:58:47,811][00323] Starting all processes... |
|
[2025-03-10 14:58:47,812][00323] Starting process learner_proc0 |
|
[2025-03-10 14:58:47,869][00323] Starting all processes... |
|
[2025-03-10 14:58:47,880][00323] Starting process inference_proc0-0 |
|
[2025-03-10 14:58:47,880][00323] Starting process rollout_proc0 |
|
[2025-03-10 14:58:47,880][00323] Starting process rollout_proc1 |
|
[2025-03-10 14:58:47,880][00323] Starting process rollout_proc2 |
|
[2025-03-10 14:58:47,880][00323] Starting process rollout_proc3 |
|
[2025-03-10 14:58:47,880][00323] Starting process rollout_proc4 |
|
[2025-03-10 14:58:47,880][00323] Starting process rollout_proc5 |
|
[2025-03-10 14:58:47,880][00323] Starting process rollout_proc6 |
|
[2025-03-10 14:58:47,880][00323] Starting process rollout_proc7 |
|
[2025-03-10 14:59:03,555][02728] Worker 1 uses CPU cores [1] |
|
[2025-03-10 14:59:03,556][02733] Worker 6 uses CPU cores [0] |
|
[2025-03-10 14:59:03,719][02726] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-10 14:59:03,723][02726] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-03-10 14:59:03,754][02713] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-10 14:59:03,757][02713] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
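Pinning a process to a GPU this way works because CUDA_VISIBLE_DEVICES must be set before the first CUDA call; afterwards the process sees exactly one device, addressed as cuda:0. A sketch of the idea (the helper name is ours, not Sample Factory's):

import os

def pin_process_to_gpu(gpu_index: int) -> None:
    # Must happen before torch initializes CUDA in this process.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_index)

pin_process_to_gpu(0)

import torch
print(torch.cuda.device_count())  # 1 on a single-GPU Colab runtime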
|
[2025-03-10 14:59:03,840][02713] Num visible devices: 1 |
|
[2025-03-10 14:59:03,849][02726] Num visible devices: 1 |
|
[2025-03-10 14:59:03,852][02734] Worker 7 uses CPU cores [1] |
|
[2025-03-10 14:59:03,871][02713] Starting seed is not provided |
|
[2025-03-10 14:59:03,872][02713] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-10 14:59:03,873][02713] Initializing actor-critic model on device cuda:0 |
|
[2025-03-10 14:59:03,874][02713] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-10 14:59:03,877][02713] RunningMeanStd input shape: (1,) |
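RunningMeanStd maintains streaming estimates of per-element mean and variance, here for the (3, 72, 128) image observations and for scalar returns, so inputs can be normalized online as statistics drift. A NumPy sketch of the standard parallel-moments update (our own illustration; the in-place TorchScript version Sample Factory uses differs in detail):

import numpy as np

class RunningMeanStd:
    def __init__(self, shape):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = 1e-4  # tiny prior so the first update is well defined

    def update(self, batch):
        # Merge batch moments into the running moments (Chan et al. update).
        b_mean, b_var, b_count = batch.mean(0), batch.var(0), batch.shape[0]
        delta = b_mean - self.mean
        total = self.count + b_count
        self.mean = self.mean + delta * b_count / total
        m2 = (self.var * self.count + b_var * b_count
              + delta**2 * self.count * b_count / total)
        self.var = m2 / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

rms = RunningMeanStd((3, 72, 128))
rms.update(np.random.rand(32, 3, 72, 128))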
|
[2025-03-10 14:59:03,911][02732] Worker 5 uses CPU cores [1] |
|
[2025-03-10 14:59:03,959][02731] Worker 4 uses CPU cores [0] |
|
[2025-03-10 14:59:03,986][02730] Worker 2 uses CPU cores [0] |
|
[2025-03-10 14:59:03,998][02727] Worker 0 uses CPU cores [0] |
|
[2025-03-10 14:59:04,047][02729] Worker 3 uses CPU cores [1] |
|
[2025-03-10 14:59:04,051][02713] ConvEncoder: input_channels=3 |
|
[2025-03-10 14:59:04,396][02713] Conv encoder output size: 512 |
|
[2025-03-10 14:59:04,396][02713] Policy head output size: 512 |
|
[2025-03-10 14:59:04,471][02713] Created Actor Critic model with architecture: |
|
[2025-03-10 14:59:04,472][02713] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
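The printed tree can be reproduced in plain PyTorch. The sketch below matches every shape the log reports (512-dim encoder output, GRU(512, 512), a 1-dim value head, 5 action logits); the conv kernel sizes and strides are elided in the printout, so the 8/4, 4/2, 3/2 stack here is an assumed default, chosen so a 3x72x128 input flattens to 2304 features:

import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions: int = 5):
        super().__init__()
        # Assumed conv stack; for a 3x72x128 input it yields 128x3x6 = 2304
        # features, giving the 512-dim encoder output the log reports.
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        self.mlp_layers = nn.Sequential(nn.Linear(2304, 512), nn.ELU())
        self.core = nn.GRU(512, 512)            # ModelCoreRNN
        self.critic_linear = nn.Linear(512, 1)  # value head
        self.distribution_linear = nn.Linear(512, num_actions)  # action logits

    def forward(self, obs, rnn_state=None):
        x = self.conv_head(obs).flatten(1)
        x = self.mlp_layers(x)
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

model = ActorCriticSketch()
logits, value, state = model(torch.zeros(4, 3, 72, 128))

Sharing one encoder and core between actor and critic (hence ActorCriticSharedWeights) means a single forward pass yields both the policy logits and the value estimate.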
|
[2025-03-10 14:59:04,800][02713] Using optimizer <class 'torch.optim.adam.Adam'> |
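Optimizer construction is standard; the hyperparameters are not shown in this excerpt, so the values below are assumptions (the actual ones live in config.json):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(512, 5)  # stand-in for the actor-critic sketched above
# lr, betas, and eps are assumed defaults, not read from this log.
optimizer = optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-6)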
|
[2025-03-10 14:59:07,670][00323] Heartbeat connected on Batcher_0 |
|
[2025-03-10 14:59:07,678][00323] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-03-10 14:59:07,785][00323] Heartbeat connected on RolloutWorker_w0 |
|
[2025-03-10 14:59:07,789][00323] Heartbeat connected on RolloutWorker_w1 |
|
[2025-03-10 14:59:07,793][00323] Heartbeat connected on RolloutWorker_w2 |
|
[2025-03-10 14:59:07,798][00323] Heartbeat connected on RolloutWorker_w3 |
|
[2025-03-10 14:59:07,800][00323] Heartbeat connected on RolloutWorker_w4 |
|
[2025-03-10 14:59:07,807][00323] Heartbeat connected on RolloutWorker_w6 |
|
[2025-03-10 14:59:07,809][00323] Heartbeat connected on RolloutWorker_w5 |
|
[2025-03-10 14:59:07,811][00323] Heartbeat connected on RolloutWorker_w7 |
|
[2025-03-10 14:59:09,720][02713] No checkpoints found |
|
[2025-03-10 14:59:09,721][02713] Did not load from checkpoint, starting from scratch! |
|
[2025-03-10 14:59:09,721][02713] Initialized policy 0 weights for model version 0 |
|
[2025-03-10 14:59:09,724][02713] LearnerWorker_p0 finished initialization! |
|
[2025-03-10 14:59:09,725][00323] Heartbeat connected on LearnerWorker_p0 |
|
[2025-03-10 14:59:09,725][02713] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-03-10 14:59:09,886][02726] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-10 14:59:09,887][02726] RunningMeanStd input shape: (1,) |
|
[2025-03-10 14:59:09,958][02726] ConvEncoder: input_channels=3 |
|
[2025-03-10 14:59:10,060][02726] Conv encoder output size: 512 |
|
[2025-03-10 14:59:10,060][02726] Policy head output size: 512 |
|
[2025-03-10 14:59:10,096][00323] Inference worker 0-0 is ready! |
|
[2025-03-10 14:59:10,097][00323] All inference workers are ready! Signal rollout workers to start! |
|
[2025-03-10 14:59:10,372][02727] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-10 14:59:10,400][02732] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-10 14:59:10,424][02733] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-10 14:59:10,432][02731] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-10 14:59:10,457][02730] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-10 14:59:10,514][02728] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-10 14:59:10,574][02729] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-10 14:59:10,599][02734] Doom resolution: 160x120, resize resolution: (128, 72) |
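Each worker renders Doom at 160x120 and downscales to (128, 72), which after a channels-first transpose is exactly the (3, 72, 128) shape the normalizer announced earlier. A sketch of that preprocessing, assuming OpenCV for the resize:

import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    # frame: 120x160x3 uint8 screen buffer from VizDoom.
    # cv2.resize takes (width, height), so (128, 72) yields a 72x128 image.
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)
    return resized.transpose(2, 0, 1)  # -> (3, 72, 128), channels first

obs = preprocess(np.zeros((120, 160, 3), dtype=np.uint8))
assert obs.shape == (3, 72, 128)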
|
[2025-03-10 14:59:12,126][00323] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-10 14:59:12,249][02727] Decorrelating experience for 0 frames... |
|
[2025-03-10 14:59:12,250][02733] Decorrelating experience for 0 frames... |
|
[2025-03-10 14:59:12,251][02730] Decorrelating experience for 0 frames... |
|
[2025-03-10 14:59:12,252][02731] Decorrelating experience for 0 frames... |
|
[2025-03-10 14:59:12,249][02732] Decorrelating experience for 0 frames... |
|
[2025-03-10 14:59:12,251][02734] Decorrelating experience for 0 frames... |
|
[2025-03-10 14:59:12,250][02729] Decorrelating experience for 0 frames... |
|
[2025-03-10 14:59:13,212][02731] Decorrelating experience for 32 frames... |
|
[2025-03-10 14:59:13,216][02733] Decorrelating experience for 32 frames... |
|
[2025-03-10 14:59:13,218][02730] Decorrelating experience for 32 frames... |
|
[2025-03-10 14:59:13,221][02727] Decorrelating experience for 32 frames... |
|
[2025-03-10 14:59:14,335][02734] Decorrelating experience for 32 frames... |
|
[2025-03-10 14:59:14,571][02732] Decorrelating experience for 32 frames... |
|
[2025-03-10 14:59:14,609][02729] Decorrelating experience for 32 frames... |
|
[2025-03-10 14:59:14,898][02727] Decorrelating experience for 64 frames... |
|
[2025-03-10 14:59:14,900][02733] Decorrelating experience for 64 frames... |
|
[2025-03-10 14:59:14,896][02731] Decorrelating experience for 64 frames... |
|
[2025-03-10 14:59:15,888][02728] Decorrelating experience for 0 frames... |
|
[2025-03-10 14:59:16,397][02729] Decorrelating experience for 64 frames... |
|
[2025-03-10 14:59:16,399][02732] Decorrelating experience for 64 frames... |
|
[2025-03-10 14:59:17,126][00323] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-10 14:59:17,457][02730] Decorrelating experience for 64 frames... |
|
[2025-03-10 14:59:17,520][02728] Decorrelating experience for 32 frames... |
|
[2025-03-10 14:59:17,637][02731] Decorrelating experience for 96 frames... |
|
[2025-03-10 14:59:17,640][02733] Decorrelating experience for 96 frames... |
|
[2025-03-10 14:59:17,676][02727] Decorrelating experience for 96 frames... |
|
[2025-03-10 14:59:18,495][02729] Decorrelating experience for 96 frames... |
|
[2025-03-10 14:59:19,572][02734] Decorrelating experience for 64 frames... |
|
[2025-03-10 14:59:20,296][02728] Decorrelating experience for 64 frames... |
|
[2025-03-10 14:59:20,785][02732] Decorrelating experience for 96 frames... |
|
[2025-03-10 14:59:22,131][00323] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 121.5. Samples: 1216. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-03-10 14:59:22,132][00323] Avg episode reward: [(0, '1.920')] |
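These status lines have a fixed shape, so the learning curve can be recovered from the log itself. A small parser sketch, with regexes written against the exact lines shown here:

import re

FPS_RE = re.compile(r"Fps is \(10 sec: ([\d.na]+), .*Total num frames: (\d+)")
REWARD_RE = re.compile(r"Avg episode reward: \[\(0, '([-\d.]+)'\)\]")

def parse_log(lines):
    frames, rewards, last_frames = [], [], 0
    for line in lines:
        if (m := FPS_RE.search(line)):
            last_frames = int(m.group(2))
        if (m := REWARD_RE.search(line)):
            frames.append(last_frames)
            rewards.append(float(m.group(1)))
    return frames, rewards

demo = ["[...] Avg episode reward: [(0, '1.920')]"]
print(parse_log(demo))  # ([0], [1.92])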
|
[2025-03-10 14:59:22,765][02730] Decorrelating experience for 96 frames... |
|
[2025-03-10 14:59:22,803][02734] Decorrelating experience for 96 frames... |
|
[2025-03-10 14:59:23,746][02728] Decorrelating experience for 96 frames... |
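The "decorrelating experience" phase staggers the eight workers before real collection starts: each plays some warm-up frames so the environments are not stepping through identical episode phases in lockstep, which would make early batches highly correlated. A toy sketch of the idea (the 32-frame chunks mirror the log; the dummy env and exact schedule are ours):

class DummyEnv:
    """Stand-in for the VizDoom env; old gym-style step() API assumed."""
    class _Space:
        def sample(self):
            return 0
    action_space = _Space()
    def reset(self):
        return None
    def step(self, action):
        return None, 0.0, False, {}

def decorrelate(env, frames_per_chunk: int = 32, chunks: int = 4):
    env.reset()
    for chunk in range(chunks):
        # Reports progress as 0, 32, 64, 96 frames, as in the log.
        print(f"Decorrelating experience for {chunk * frames_per_chunk} frames...")
        for _ in range(frames_per_chunk):
            _, _, done, _ = env.step(env.action_space.sample())
            if done:
                env.reset()

decorrelate(DummyEnv())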
|
[2025-03-10 14:59:24,705][02713] Signal inference workers to stop experience collection... |
|
[2025-03-10 14:59:24,731][02726] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-03-10 14:59:26,221][02713] Signal inference workers to resume experience collection... |
|
[2025-03-10 14:59:26,222][02726] InferenceWorker_p0-w0: resuming experience collection |
|
[2025-03-10 14:59:27,126][00323] Fps is (10 sec: 819.2, 60 sec: 546.1, 300 sec: 546.1). Total num frames: 8192. Throughput: 0: 188.1. Samples: 2822. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
|
[2025-03-10 14:59:27,128][00323] Avg episode reward: [(0, '2.966')] |
|
[2025-03-10 14:59:32,126][00323] Fps is (10 sec: 3278.3, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 32768. Throughput: 0: 414.6. Samples: 8292. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 14:59:32,130][00323] Avg episode reward: [(0, '3.830')] |
|
[2025-03-10 14:59:34,727][02726] Updated weights for policy 0, policy_version 10 (0.0014) |
|
[2025-03-10 14:59:37,126][00323] Fps is (10 sec: 4096.1, 60 sec: 1966.1, 300 sec: 1966.1). Total num frames: 49152. Throughput: 0: 523.0. Samples: 13074. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 14:59:37,130][00323] Avg episode reward: [(0, '4.185')] |
|
[2025-03-10 14:59:42,126][00323] Fps is (10 sec: 3686.4, 60 sec: 2321.1, 300 sec: 2321.1). Total num frames: 69632. Throughput: 0: 551.9. Samples: 16558. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 14:59:42,130][00323] Avg episode reward: [(0, '4.378')] |
|
[2025-03-10 14:59:44,294][02726] Updated weights for policy 0, policy_version 20 (0.0020) |
|
[2025-03-10 14:59:47,126][00323] Fps is (10 sec: 4096.0, 60 sec: 2574.6, 300 sec: 2574.6). Total num frames: 90112. Throughput: 0: 648.3. Samples: 22692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 14:59:47,130][00323] Avg episode reward: [(0, '4.481')] |
|
[2025-03-10 14:59:52,126][00323] Fps is (10 sec: 3276.8, 60 sec: 2560.0, 300 sec: 2560.0). Total num frames: 102400. Throughput: 0: 666.8. Samples: 26674. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 14:59:52,128][00323] Avg episode reward: [(0, '4.398')] |
|
[2025-03-10 14:59:52,133][02713] Saving new best policy, reward=4.398! |
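"Saving new best policy" fires whenever the average episode reward exceeds the best value seen so far, keeping a separate best checkpoint alongside the rolling ones. A minimal sketch of that bookkeeping (the file name and payload here are assumptions):

import torch

class BestPolicyTracker:
    def __init__(self, path: str = "best.pth"):  # name assumed
        self.best = float("-inf")
        self.path = path

    def update(self, model, avg_reward: float) -> None:
        if avg_reward > self.best:
            self.best = avg_reward
            # The real checkpoint also carries optimizer and env-step state.
            torch.save({"model": model.state_dict(), "reward": avg_reward},
                       self.path)
            print(f"Saving new best policy, reward={avg_reward:.3f}!")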
|
[2025-03-10 14:59:56,224][02726] Updated weights for policy 0, policy_version 30 (0.0020) |
|
[2025-03-10 14:59:57,126][00323] Fps is (10 sec: 3686.4, 60 sec: 2821.7, 300 sec: 2821.7). Total num frames: 126976. Throughput: 0: 668.7. Samples: 30092. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 14:59:57,130][00323] Avg episode reward: [(0, '4.411')] |
|
[2025-03-10 14:59:57,134][02713] Saving new best policy, reward=4.411! |
|
[2025-03-10 15:00:02,126][00323] Fps is (10 sec: 4505.6, 60 sec: 2949.1, 300 sec: 2949.1). Total num frames: 147456. Throughput: 0: 822.1. Samples: 36996. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:00:02,132][00323] Avg episode reward: [(0, '4.409')] |
|
[2025-03-10 15:00:06,792][02726] Updated weights for policy 0, policy_version 40 (0.0013) |
|
[2025-03-10 15:00:07,126][00323] Fps is (10 sec: 3686.4, 60 sec: 2978.9, 300 sec: 2978.9). Total num frames: 163840. Throughput: 0: 900.9. Samples: 41752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:00:07,130][00323] Avg episode reward: [(0, '4.393')] |
|
[2025-03-10 15:00:12,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3072.0, 300 sec: 3072.0). Total num frames: 184320. Throughput: 0: 942.4. Samples: 45228. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:00:12,130][00323] Avg episode reward: [(0, '4.472')] |
|
[2025-03-10 15:00:12,136][02713] Saving new best policy, reward=4.472! |
|
[2025-03-10 15:00:15,914][02726] Updated weights for policy 0, policy_version 50 (0.0021) |
|
[2025-03-10 15:00:17,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3150.8). Total num frames: 204800. Throughput: 0: 971.9. Samples: 52028. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:00:17,134][00323] Avg episode reward: [(0, '4.530')] |
|
[2025-03-10 15:00:17,171][02713] Saving new best policy, reward=4.530! |
|
[2025-03-10 15:00:22,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3686.7, 300 sec: 3159.8). Total num frames: 221184. Throughput: 0: 968.4. Samples: 56654. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:00:22,131][00323] Avg episode reward: [(0, '4.471')] |
|
[2025-03-10 15:00:26,724][02726] Updated weights for policy 0, policy_version 60 (0.0028) |
|
[2025-03-10 15:00:27,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3276.8). Total num frames: 245760. Throughput: 0: 967.5. Samples: 60096. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:00:27,130][00323] Avg episode reward: [(0, '4.405')] |
|
[2025-03-10 15:00:32,126][00323] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3328.0). Total num frames: 266240. Throughput: 0: 983.5. Samples: 66950. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:00:32,128][00323] Avg episode reward: [(0, '4.364')] |
|
[2025-03-10 15:00:37,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3325.0). Total num frames: 282624. Throughput: 0: 999.5. Samples: 71650. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:00:37,128][00323] Avg episode reward: [(0, '4.370')] |
|
[2025-03-10 15:00:37,715][02726] Updated weights for policy 0, policy_version 70 (0.0019) |
|
[2025-03-10 15:00:42,126][00323] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3367.8). Total num frames: 303104. Throughput: 0: 997.2. Samples: 74966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:00:42,130][00323] Avg episode reward: [(0, '4.362')] |
|
[2025-03-10 15:00:42,137][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000074_303104.pth... |
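Checkpoint file names encode the policy version and the cumulative env-frame count: checkpoint_000000074_303104.pth is policy version 74 at 303,104 frames (74 x 4096, i.e. 4096 frames per policy update in this run). A small parser for that convention:

import re

CKPT_RE = re.compile(r"checkpoint_(\d+)_(\d+)\.pth")

def parse_checkpoint_name(name: str) -> tuple[int, int]:
    m = CKPT_RE.search(name)
    if m is None:
        raise ValueError(f"not a checkpoint name: {name}")
    return int(m.group(1)), int(m.group(2))  # (policy_version, env_frames)

assert parse_checkpoint_name("checkpoint_000000074_303104.pth") == (74, 303104)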
|
[2025-03-10 15:00:47,127][00323] Fps is (10 sec: 4095.8, 60 sec: 3891.2, 300 sec: 3406.1). Total num frames: 323584. Throughput: 0: 985.4. Samples: 81340. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:00:47,130][00323] Avg episode reward: [(0, '4.326')] |
|
[2025-03-10 15:00:47,757][02726] Updated weights for policy 0, policy_version 80 (0.0015) |
|
[2025-03-10 15:00:52,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3399.7). Total num frames: 339968. Throughput: 0: 979.9. Samples: 85846. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:00:52,130][00323] Avg episode reward: [(0, '4.310')] |
|
[2025-03-10 15:00:57,126][00323] Fps is (10 sec: 3686.6, 60 sec: 3891.2, 300 sec: 3432.8). Total num frames: 360448. Throughput: 0: 976.7. Samples: 89180. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:00:57,130][00323] Avg episode reward: [(0, '4.486')] |
|
[2025-03-10 15:00:58,157][02726] Updated weights for policy 0, policy_version 90 (0.0029) |
|
[2025-03-10 15:01:02,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3463.0). Total num frames: 380928. Throughput: 0: 976.2. Samples: 95958. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-10 15:01:02,130][00323] Avg episode reward: [(0, '4.499')] |
|
[2025-03-10 15:01:07,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3454.9). Total num frames: 397312. Throughput: 0: 974.4. Samples: 100504. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:01:07,130][00323] Avg episode reward: [(0, '4.473')] |
|
[2025-03-10 15:01:09,249][02726] Updated weights for policy 0, policy_version 100 (0.0017) |
|
[2025-03-10 15:01:12,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3515.7). Total num frames: 421888. Throughput: 0: 974.3. Samples: 103938. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:01:12,130][00323] Avg episode reward: [(0, '4.591')] |
|
[2025-03-10 15:01:12,136][02713] Saving new best policy, reward=4.591! |
|
[2025-03-10 15:01:17,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3506.2). Total num frames: 438272. Throughput: 0: 966.3. Samples: 110434. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:01:17,130][00323] Avg episode reward: [(0, '4.499')] |
|
[2025-03-10 15:01:20,256][02726] Updated weights for policy 0, policy_version 110 (0.0031) |
|
[2025-03-10 15:01:22,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3497.4). Total num frames: 454656. Throughput: 0: 968.7. Samples: 115240. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:01:22,130][00323] Avg episode reward: [(0, '4.514')] |
|
[2025-03-10 15:01:27,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3549.9). Total num frames: 479232. Throughput: 0: 970.4. Samples: 118636. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:01:27,127][00323] Avg episode reward: [(0, '4.624')] |
|
[2025-03-10 15:01:27,129][02713] Saving new best policy, reward=4.624! |
|
[2025-03-10 15:01:29,487][02726] Updated weights for policy 0, policy_version 120 (0.0015) |
|
[2025-03-10 15:01:32,127][00323] Fps is (10 sec: 4505.3, 60 sec: 3891.2, 300 sec: 3569.4). Total num frames: 499712. Throughput: 0: 978.4. Samples: 125368. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:01:32,128][00323] Avg episode reward: [(0, '4.650')] |
|
[2025-03-10 15:01:32,138][02713] Saving new best policy, reward=4.650! |
|
[2025-03-10 15:01:37,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3559.3). Total num frames: 516096. Throughput: 0: 980.0. Samples: 129944. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:01:37,129][00323] Avg episode reward: [(0, '4.659')] |
|
[2025-03-10 15:01:37,132][02713] Saving new best policy, reward=4.659! |
|
[2025-03-10 15:01:40,501][02726] Updated weights for policy 0, policy_version 130 (0.0021) |
|
[2025-03-10 15:01:42,126][00323] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3577.2). Total num frames: 536576. Throughput: 0: 978.5. Samples: 133214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:01:42,129][00323] Avg episode reward: [(0, '4.639')] |
|
[2025-03-10 15:01:47,127][00323] Fps is (10 sec: 4095.7, 60 sec: 3891.2, 300 sec: 3593.9). Total num frames: 557056. Throughput: 0: 975.1. Samples: 139840. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:01:47,128][00323] Avg episode reward: [(0, '4.459')] |
|
[2025-03-10 15:01:51,482][02726] Updated weights for policy 0, policy_version 140 (0.0020) |
|
[2025-03-10 15:01:52,126][00323] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3584.0). Total num frames: 573440. Throughput: 0: 981.2. Samples: 144658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:01:52,129][00323] Avg episode reward: [(0, '4.265')] |
|
[2025-03-10 15:01:57,126][00323] Fps is (10 sec: 4096.3, 60 sec: 3959.5, 300 sec: 3624.3). Total num frames: 598016. Throughput: 0: 980.2. Samples: 148048. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:01:57,127][00323] Avg episode reward: [(0, '4.288')] |
|
[2025-03-10 15:02:00,457][02726] Updated weights for policy 0, policy_version 150 (0.0023) |
|
[2025-03-10 15:02:02,129][00323] Fps is (10 sec: 4504.4, 60 sec: 3959.3, 300 sec: 3638.2). Total num frames: 618496. Throughput: 0: 981.7. Samples: 154612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:02:02,133][00323] Avg episode reward: [(0, '4.331')] |
|
[2025-03-10 15:02:07,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3627.9). Total num frames: 634880. Throughput: 0: 991.2. Samples: 159844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:02:07,131][00323] Avg episode reward: [(0, '4.516')] |
|
[2025-03-10 15:02:11,065][02726] Updated weights for policy 0, policy_version 160 (0.0016) |
|
[2025-03-10 15:02:12,126][00323] Fps is (10 sec: 4097.0, 60 sec: 3959.5, 300 sec: 3663.6). Total num frames: 659456. Throughput: 0: 994.9. Samples: 163406. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:02:12,131][00323] Avg episode reward: [(0, '4.708')] |
|
[2025-03-10 15:02:12,139][02713] Saving new best policy, reward=4.708! |
|
[2025-03-10 15:02:17,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3653.2). Total num frames: 675840. Throughput: 0: 985.0. Samples: 169694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:02:17,131][00323] Avg episode reward: [(0, '4.610')] |
|
[2025-03-10 15:02:21,929][02726] Updated weights for policy 0, policy_version 170 (0.0012) |
|
[2025-03-10 15:02:22,131][00323] Fps is (10 sec: 3684.8, 60 sec: 4027.4, 300 sec: 3664.8). Total num frames: 696320. Throughput: 0: 998.3. Samples: 174872. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:02:22,132][00323] Avg episode reward: [(0, '4.729')] |
|
[2025-03-10 15:02:22,142][02713] Saving new best policy, reward=4.729! |
|
[2025-03-10 15:02:27,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3675.9). Total num frames: 716800. Throughput: 0: 1001.1. Samples: 178264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:02:27,131][00323] Avg episode reward: [(0, '4.713')] |
|
[2025-03-10 15:02:31,711][02726] Updated weights for policy 0, policy_version 180 (0.0019) |
|
[2025-03-10 15:02:32,126][00323] Fps is (10 sec: 4097.8, 60 sec: 3959.5, 300 sec: 3686.4). Total num frames: 737280. Throughput: 0: 992.0. Samples: 184480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:02:32,128][00323] Avg episode reward: [(0, '4.463')] |
|
[2025-03-10 15:02:37,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3676.4). Total num frames: 753664. Throughput: 0: 1002.5. Samples: 189772. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:02:37,130][00323] Avg episode reward: [(0, '4.646')] |
|
[2025-03-10 15:02:41,729][02726] Updated weights for policy 0, policy_version 190 (0.0012) |
|
[2025-03-10 15:02:42,126][00323] Fps is (10 sec: 4096.1, 60 sec: 4027.8, 300 sec: 3705.9). Total num frames: 778240. Throughput: 0: 1004.0. Samples: 193226. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:02:42,130][00323] Avg episode reward: [(0, '5.020')] |
|
[2025-03-10 15:02:42,138][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000190_778240.pth... |
|
[2025-03-10 15:02:42,278][02713] Saving new best policy, reward=5.020! |
|
[2025-03-10 15:02:47,130][00323] Fps is (10 sec: 4094.3, 60 sec: 3959.2, 300 sec: 3695.9). Total num frames: 794624. Throughput: 0: 993.3. Samples: 199314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:02:47,134][00323] Avg episode reward: [(0, '4.968')] |
|
[2025-03-10 15:02:52,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3705.0). Total num frames: 815104. Throughput: 0: 997.9. Samples: 204748. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:02:52,129][00323] Avg episode reward: [(0, '4.727')] |
|
[2025-03-10 15:02:52,649][02726] Updated weights for policy 0, policy_version 200 (0.0024) |
|
[2025-03-10 15:02:57,126][00323] Fps is (10 sec: 4097.7, 60 sec: 3959.5, 300 sec: 3713.7). Total num frames: 835584. Throughput: 0: 994.4. Samples: 208154. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:02:57,129][00323] Avg episode reward: [(0, '4.482')] |
|
[2025-03-10 15:03:02,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.6, 300 sec: 3722.0). Total num frames: 856064. Throughput: 0: 989.8. Samples: 214236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:03:02,135][00323] Avg episode reward: [(0, '4.586')] |
|
[2025-03-10 15:03:03,028][02726] Updated weights for policy 0, policy_version 210 (0.0012) |
|
[2025-03-10 15:03:07,126][00323] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3730.0). Total num frames: 876544. Throughput: 0: 1000.0. Samples: 219866. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:03:07,127][00323] Avg episode reward: [(0, '4.976')] |
|
[2025-03-10 15:03:12,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3737.6). Total num frames: 897024. Throughput: 0: 1002.9. Samples: 223394. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:03:12,127][00323] Avg episode reward: [(0, '4.874')] |
|
[2025-03-10 15:03:12,224][02726] Updated weights for policy 0, policy_version 220 (0.0016) |
|
[2025-03-10 15:03:17,129][00323] Fps is (10 sec: 4094.9, 60 sec: 4027.6, 300 sec: 3744.9). Total num frames: 917504. Throughput: 0: 998.6. Samples: 229420. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:03:17,130][00323] Avg episode reward: [(0, '4.745')] |
|
[2025-03-10 15:03:22,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 3735.6). Total num frames: 933888. Throughput: 0: 1009.6. Samples: 235204. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:03:22,133][00323] Avg episode reward: [(0, '4.896')] |
|
[2025-03-10 15:03:23,226][02726] Updated weights for policy 0, policy_version 230 (0.0032) |
|
[2025-03-10 15:03:27,126][00323] Fps is (10 sec: 4097.1, 60 sec: 4027.7, 300 sec: 3758.7). Total num frames: 958464. Throughput: 0: 1003.5. Samples: 238384. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:03:27,131][00323] Avg episode reward: [(0, '5.047')] |
|
[2025-03-10 15:03:27,135][02713] Saving new best policy, reward=5.047! |
|
[2025-03-10 15:03:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3733.7). Total num frames: 970752. Throughput: 0: 987.6. Samples: 243754. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:03:32,129][00323] Avg episode reward: [(0, '5.183')] |
|
[2025-03-10 15:03:32,223][02713] Saving new best policy, reward=5.183! |
|
[2025-03-10 15:03:34,743][02726] Updated weights for policy 0, policy_version 240 (0.0027) |
|
[2025-03-10 15:03:37,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3740.5). Total num frames: 991232. Throughput: 0: 984.2. Samples: 249036. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:03:37,130][00323] Avg episode reward: [(0, '4.961')] |
|
[2025-03-10 15:03:42,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3747.1). Total num frames: 1011712. Throughput: 0: 982.3. Samples: 252356. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:03:42,128][00323] Avg episode reward: [(0, '5.072')] |
|
[2025-03-10 15:03:44,105][02726] Updated weights for policy 0, policy_version 250 (0.0015) |
|
[2025-03-10 15:03:47,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.5, 300 sec: 3738.5). Total num frames: 1028096. Throughput: 0: 970.4. Samples: 257904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:03:47,129][00323] Avg episode reward: [(0, '5.347')] |
|
[2025-03-10 15:03:47,133][02713] Saving new best policy, reward=5.347! |
|
[2025-03-10 15:03:52,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3759.5). Total num frames: 1052672. Throughput: 0: 975.2. Samples: 263748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:03:52,131][00323] Avg episode reward: [(0, '5.447')] |
|
[2025-03-10 15:03:52,139][02713] Saving new best policy, reward=5.447! |
|
[2025-03-10 15:03:55,148][02726] Updated weights for policy 0, policy_version 260 (0.0012) |
|
[2025-03-10 15:03:57,126][00323] Fps is (10 sec: 4505.4, 60 sec: 3959.4, 300 sec: 3765.4). Total num frames: 1073152. Throughput: 0: 961.5. Samples: 266664. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:03:57,128][00323] Avg episode reward: [(0, '5.839')] |
|
[2025-03-10 15:03:57,129][02713] Saving new best policy, reward=5.839! |
|
[2025-03-10 15:04:02,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3742.9). Total num frames: 1085440. Throughput: 0: 946.8. Samples: 272024. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:04:02,127][00323] Avg episode reward: [(0, '5.688')] |
|
[2025-03-10 15:04:06,595][02726] Updated weights for policy 0, policy_version 270 (0.0030) |
|
[2025-03-10 15:04:07,126][00323] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1105920. Throughput: 0: 944.9. Samples: 277726. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:04:07,130][00323] Avg episode reward: [(0, '5.833')] |
|
[2025-03-10 15:04:12,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1130496. Throughput: 0: 948.9. Samples: 281086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:04:12,130][00323] Avg episode reward: [(0, '6.565')] |
|
[2025-03-10 15:04:12,137][02713] Saving new best policy, reward=6.565! |
|
[2025-03-10 15:04:17,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3873.9). Total num frames: 1142784. Throughput: 0: 946.6. Samples: 286350. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:04:17,130][00323] Avg episode reward: [(0, '6.477')] |
|
[2025-03-10 15:04:17,441][02726] Updated weights for policy 0, policy_version 280 (0.0021) |
|
[2025-03-10 15:04:22,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 1163264. Throughput: 0: 961.6. Samples: 292310. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:04:22,130][00323] Avg episode reward: [(0, '6.917')] |
|
[2025-03-10 15:04:22,136][02713] Saving new best policy, reward=6.917! |
|
[2025-03-10 15:04:26,835][02726] Updated weights for policy 0, policy_version 290 (0.0019) |
|
[2025-03-10 15:04:27,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 1187840. Throughput: 0: 958.4. Samples: 295484. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:04:27,127][00323] Avg episode reward: [(0, '7.400')] |
|
[2025-03-10 15:04:27,130][02713] Saving new best policy, reward=7.400! |
|
[2025-03-10 15:04:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 1200128. Throughput: 0: 949.4. Samples: 300626. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:04:32,130][00323] Avg episode reward: [(0, '7.751')] |
|
[2025-03-10 15:04:32,143][02713] Saving new best policy, reward=7.751! |
|
[2025-03-10 15:04:37,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 1220608. Throughput: 0: 954.7. Samples: 306708. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:04:37,127][00323] Avg episode reward: [(0, '8.128')] |
|
[2025-03-10 15:04:37,131][02713] Saving new best policy, reward=8.128! |
|
[2025-03-10 15:04:38,289][02726] Updated weights for policy 0, policy_version 300 (0.0018) |
|
[2025-03-10 15:04:42,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 1245184. Throughput: 0: 961.3. Samples: 309920. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:04:42,127][00323] Avg episode reward: [(0, '7.933')] |
|
[2025-03-10 15:04:42,134][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000304_1245184.pth... |
|
[2025-03-10 15:04:42,265][02713] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000074_303104.pth |
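Stale checkpoints are garbage-collected as new ones are written; from the removals in this log, the run keeps the two most recent rolling checkpoints. A sketch of that rotation (the keep-count is inferred from the log, not read from the config):

import os
import re

def rotate_checkpoints(ckpt_dir: str, keep: int = 2) -> None:
    # Zero-padded version numbers make lexicographic order chronological.
    ckpts = sorted(
        f for f in os.listdir(ckpt_dir)
        if re.fullmatch(r"checkpoint_\d+_\d+\.pth", f)
    )
    for stale in ckpts[:-keep]:
        path = os.path.join(ckpt_dir, stale)
        os.remove(path)
        print(f"Removing {path}")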
|
[2025-03-10 15:04:47,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 1257472. Throughput: 0: 954.1. Samples: 314960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:04:47,127][00323] Avg episode reward: [(0, '7.742')] |
|
[2025-03-10 15:04:49,340][02726] Updated weights for policy 0, policy_version 310 (0.0018) |
|
[2025-03-10 15:04:52,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3901.6). Total num frames: 1277952. Throughput: 0: 889.8. Samples: 317768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:04:52,128][00323] Avg episode reward: [(0, '7.961')] |
|
[2025-03-10 15:04:57,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3901.6). Total num frames: 1298432. Throughput: 0: 955.1. Samples: 324064. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:04:57,127][00323] Avg episode reward: [(0, '8.016')] |
|
[2025-03-10 15:04:59,980][02726] Updated weights for policy 0, policy_version 320 (0.0028) |
|
[2025-03-10 15:05:02,129][00323] Fps is (10 sec: 3685.5, 60 sec: 3822.8, 300 sec: 3901.6). Total num frames: 1314816. Throughput: 0: 949.1. Samples: 329062. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:05:02,130][00323] Avg episode reward: [(0, '8.522')] |
|
[2025-03-10 15:05:02,140][02713] Saving new best policy, reward=8.522! |
|
[2025-03-10 15:05:07,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 1335296. Throughput: 0: 952.8. Samples: 335188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:05:07,130][00323] Avg episode reward: [(0, '8.451')] |
|
[2025-03-10 15:05:10,219][02726] Updated weights for policy 0, policy_version 330 (0.0024) |
|
[2025-03-10 15:05:12,126][00323] Fps is (10 sec: 4506.7, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 1359872. Throughput: 0: 957.8. Samples: 338584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:05:12,133][00323] Avg episode reward: [(0, '8.633')] |
|
[2025-03-10 15:05:12,143][02713] Saving new best policy, reward=8.633! |
|
[2025-03-10 15:05:17,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 1372160. Throughput: 0: 951.2. Samples: 343432. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-03-10 15:05:17,127][00323] Avg episode reward: [(0, '8.187')] |
|
[2025-03-10 15:05:21,296][02726] Updated weights for policy 0, policy_version 340 (0.0016) |
|
[2025-03-10 15:05:22,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1392640. Throughput: 0: 955.0. Samples: 349684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:05:22,130][00323] Avg episode reward: [(0, '8.572')] |
|
[2025-03-10 15:05:27,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 1417216. Throughput: 0: 954.0. Samples: 352852. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-03-10 15:05:27,127][00323] Avg episode reward: [(0, '8.014')] |
|
[2025-03-10 15:05:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1429504. Throughput: 0: 946.0. Samples: 357532. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:05:32,129][00323] Avg episode reward: [(0, '7.876')] |
|
[2025-03-10 15:05:32,757][02726] Updated weights for policy 0, policy_version 350 (0.0013) |
|
[2025-03-10 15:05:37,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1449984. Throughput: 0: 1024.0. Samples: 363850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:05:37,130][00323] Avg episode reward: [(0, '8.043')] |
|
[2025-03-10 15:05:41,940][02726] Updated weights for policy 0, policy_version 360 (0.0017) |
|
[2025-03-10 15:05:42,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 1474560. Throughput: 0: 959.0. Samples: 367218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:05:42,130][00323] Avg episode reward: [(0, '8.488')] |
|
[2025-03-10 15:05:47,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1486848. Throughput: 0: 956.0. Samples: 372078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:05:47,127][00323] Avg episode reward: [(0, '9.654')] |
|
[2025-03-10 15:05:47,129][02713] Saving new best policy, reward=9.654! |
|
[2025-03-10 15:05:52,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1511424. Throughput: 0: 960.8. Samples: 378422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:05:52,129][00323] Avg episode reward: [(0, '9.651')] |
|
[2025-03-10 15:05:52,951][02726] Updated weights for policy 0, policy_version 370 (0.0024) |
|
[2025-03-10 15:05:57,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1531904. Throughput: 0: 955.7. Samples: 381590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:05:57,132][00323] Avg episode reward: [(0, '10.281')] |
|
[2025-03-10 15:05:57,135][02713] Saving new best policy, reward=10.281! |
|
[2025-03-10 15:06:02,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3823.1, 300 sec: 3887.7). Total num frames: 1544192. Throughput: 0: 952.1. Samples: 386278. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:06:02,130][00323] Avg episode reward: [(0, '9.610')] |
|
[2025-03-10 15:06:04,214][02726] Updated weights for policy 0, policy_version 380 (0.0015) |
|
[2025-03-10 15:06:07,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1568768. Throughput: 0: 959.8. Samples: 392874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:06:07,130][00323] Avg episode reward: [(0, '9.227')] |
|
[2025-03-10 15:06:12,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 1589248. Throughput: 0: 961.1. Samples: 396102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:06:12,129][00323] Avg episode reward: [(0, '9.523')] |
|
[2025-03-10 15:06:14,836][02726] Updated weights for policy 0, policy_version 390 (0.0022) |
|
[2025-03-10 15:06:17,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1605632. Throughput: 0: 964.0. Samples: 400914. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:06:17,127][00323] Avg episode reward: [(0, '9.966')] |
|
[2025-03-10 15:06:22,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1626112. Throughput: 0: 978.7. Samples: 407890. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:06:22,130][00323] Avg episode reward: [(0, '10.305')] |
|
[2025-03-10 15:06:22,137][02713] Saving new best policy, reward=10.305! |
|
[2025-03-10 15:06:24,065][02726] Updated weights for policy 0, policy_version 400 (0.0024) |
|
[2025-03-10 15:06:27,133][00323] Fps is (10 sec: 4093.3, 60 sec: 3822.5, 300 sec: 3887.6). Total num frames: 1646592. Throughput: 0: 978.2. Samples: 411244. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:06:27,134][00323] Avg episode reward: [(0, '9.593')] |
|
[2025-03-10 15:06:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1662976. Throughput: 0: 977.9. Samples: 416084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:06:32,128][00323] Avg episode reward: [(0, '10.256')] |
|
[2025-03-10 15:06:34,923][02726] Updated weights for policy 0, policy_version 410 (0.0018) |
|
[2025-03-10 15:06:37,126][00323] Fps is (10 sec: 4098.7, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1687552. Throughput: 0: 987.4. Samples: 422854. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:06:37,127][00323] Avg episode reward: [(0, '11.331')] |
|
[2025-03-10 15:06:37,131][02713] Saving new best policy, reward=11.331! |
|
[2025-03-10 15:06:42,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1708032. Throughput: 0: 990.7. Samples: 426172. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:06:42,128][00323] Avg episode reward: [(0, '11.644')] |
|
[2025-03-10 15:06:42,135][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000417_1708032.pth... |
|
[2025-03-10 15:06:42,304][02713] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000190_778240.pth |
|
[2025-03-10 15:06:42,320][02713] Saving new best policy, reward=11.644! |
|
[2025-03-10 15:06:45,963][02726] Updated weights for policy 0, policy_version 420 (0.0023) |
|
[2025-03-10 15:06:47,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1724416. Throughput: 0: 987.7. Samples: 430724. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-10 15:06:47,131][00323] Avg episode reward: [(0, '11.316')] |
|
[2025-03-10 15:06:52,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1744896. Throughput: 0: 994.6. Samples: 437630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:06:52,130][00323] Avg episode reward: [(0, '10.976')] |
|
[2025-03-10 15:06:55,153][02726] Updated weights for policy 0, policy_version 430 (0.0022) |
|
[2025-03-10 15:06:57,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 1765376. Throughput: 0: 996.4. Samples: 440940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:06:57,130][00323] Avg episode reward: [(0, '12.400')] |
|
[2025-03-10 15:06:57,134][02713] Saving new best policy, reward=12.400! |
|
[2025-03-10 15:07:02,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1781760. Throughput: 0: 990.7. Samples: 445494. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:07:02,128][00323] Avg episode reward: [(0, '12.079')] |
|
[2025-03-10 15:07:06,115][02726] Updated weights for policy 0, policy_version 440 (0.0026) |
|
[2025-03-10 15:07:07,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1806336. Throughput: 0: 986.1. Samples: 452266. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:07:07,129][00323] Avg episode reward: [(0, '12.106')] |
|
[2025-03-10 15:07:12,127][00323] Fps is (10 sec: 4095.8, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1822720. Throughput: 0: 988.9. Samples: 455740. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:07:12,128][00323] Avg episode reward: [(0, '12.267')] |
|
[2025-03-10 15:07:16,845][02726] Updated weights for policy 0, policy_version 450 (0.0018) |
|
[2025-03-10 15:07:17,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.8). Total num frames: 1843200. Throughput: 0: 987.4. Samples: 460518. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:07:17,127][00323] Avg episode reward: [(0, '11.674')] |
|
[2025-03-10 15:07:22,126][00323] Fps is (10 sec: 4505.8, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 1867776. Throughput: 0: 996.5. Samples: 467698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:07:22,127][00323] Avg episode reward: [(0, '12.551')] |
|
[2025-03-10 15:07:22,132][02713] Saving new best policy, reward=12.551! |
|
[2025-03-10 15:07:25,890][02726] Updated weights for policy 0, policy_version 460 (0.0012) |
|
[2025-03-10 15:07:27,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.9, 300 sec: 3887.7). Total num frames: 1884160. Throughput: 0: 996.7. Samples: 471024. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:07:27,130][00323] Avg episode reward: [(0, '12.024')] |
|
[2025-03-10 15:07:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 1904640. Throughput: 0: 997.7. Samples: 475620. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:07:32,131][00323] Avg episode reward: [(0, '12.086')] |
|
[2025-03-10 15:07:36,552][02726] Updated weights for policy 0, policy_version 470 (0.0020) |
|
[2025-03-10 15:07:37,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1925120. Throughput: 0: 996.6. Samples: 482478. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:07:37,128][00323] Avg episode reward: [(0, '12.822')] |
|
[2025-03-10 15:07:37,135][02713] Saving new best policy, reward=12.822! |
|
[2025-03-10 15:07:42,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.7). Total num frames: 1945600. Throughput: 0: 992.4. Samples: 485598. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:07:42,132][00323] Avg episode reward: [(0, '12.498')] |
|
[2025-03-10 15:07:47,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1961984. Throughput: 0: 990.6. Samples: 490072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:07:47,127][00323] Avg episode reward: [(0, '13.930')] |
|
[2025-03-10 15:07:47,129][02713] Saving new best policy, reward=13.930! |
|
[2025-03-10 15:07:47,990][02726] Updated weights for policy 0, policy_version 480 (0.0016) |
|
[2025-03-10 15:07:52,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1982464. Throughput: 0: 985.5. Samples: 496614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:07:52,127][00323] Avg episode reward: [(0, '13.971')] |
|
[2025-03-10 15:07:52,134][02713] Saving new best policy, reward=13.971! |
|
[2025-03-10 15:07:57,129][00323] Fps is (10 sec: 3685.5, 60 sec: 3891.0, 300 sec: 3873.8). Total num frames: 1998848. Throughput: 0: 977.6. Samples: 499732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:07:57,130][00323] Avg episode reward: [(0, '14.480')] |
|
[2025-03-10 15:07:57,132][02713] Saving new best policy, reward=14.480! |
|
[2025-03-10 15:07:59,266][02726] Updated weights for policy 0, policy_version 490 (0.0030) |
|
[2025-03-10 15:08:02,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2019328. Throughput: 0: 972.0. Samples: 504258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:08:02,130][00323] Avg episode reward: [(0, '14.147')] |
|
[2025-03-10 15:08:07,126][00323] Fps is (10 sec: 4097.1, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2039808. Throughput: 0: 964.1. Samples: 511082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:08:07,127][00323] Avg episode reward: [(0, '14.148')] |
|
[2025-03-10 15:08:08,407][02726] Updated weights for policy 0, policy_version 500 (0.0013) |
|
[2025-03-10 15:08:12,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.9). Total num frames: 2060288. Throughput: 0: 965.0. Samples: 514450. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:08:12,127][00323] Avg episode reward: [(0, '14.675')] |
|
[2025-03-10 15:08:12,137][02713] Saving new best policy, reward=14.675! |
|
[2025-03-10 15:08:17,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2076672. Throughput: 0: 968.8. Samples: 519216. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:08:17,131][00323] Avg episode reward: [(0, '15.334')] |
|
[2025-03-10 15:08:17,135][02713] Saving new best policy, reward=15.334! |
|
[2025-03-10 15:08:19,249][02726] Updated weights for policy 0, policy_version 510 (0.0013) |
|
[2025-03-10 15:08:22,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2101248. Throughput: 0: 970.4. Samples: 526144. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:08:22,129][00323] Avg episode reward: [(0, '15.915')] |
|
[2025-03-10 15:08:22,138][02713] Saving new best policy, reward=15.915! |
|
[2025-03-10 15:08:27,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2117632. Throughput: 0: 971.6. Samples: 529322. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:08:27,127][00323] Avg episode reward: [(0, '16.174')] |
|
[2025-03-10 15:08:27,129][02713] Saving new best policy, reward=16.174! |
|
[2025-03-10 15:08:30,276][02726] Updated weights for policy 0, policy_version 520 (0.0022) |
|
[2025-03-10 15:08:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2138112. Throughput: 0: 978.6. Samples: 534108. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:08:32,131][00323] Avg episode reward: [(0, '15.671')] |
|
[2025-03-10 15:08:37,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2158592. Throughput: 0: 990.0. Samples: 541164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:08:37,134][00323] Avg episode reward: [(0, '15.947')] |
|
[2025-03-10 15:08:38,932][02726] Updated weights for policy 0, policy_version 530 (0.0018) |
|
[2025-03-10 15:08:42,127][00323] Fps is (10 sec: 4095.7, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2179072. Throughput: 0: 989.6. Samples: 544262. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:08:42,128][00323] Avg episode reward: [(0, '16.090')] |
|
[2025-03-10 15:08:42,135][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000532_2179072.pth... |
|
[2025-03-10 15:08:42,327][02713] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000304_1245184.pth |
|
[2025-03-10 15:08:47,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2199552. Throughput: 0: 1003.4. Samples: 549412. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:08:47,127][00323] Avg episode reward: [(0, '17.160')] |
|
[2025-03-10 15:08:47,129][02713] Saving new best policy, reward=17.160! |
|
[2025-03-10 15:08:49,674][02726] Updated weights for policy 0, policy_version 540 (0.0013) |
|
[2025-03-10 15:08:52,126][00323] Fps is (10 sec: 4096.3, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2220032. Throughput: 0: 1004.8. Samples: 556300. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:08:52,127][00323] Avg episode reward: [(0, '17.315')] |
|
[2025-03-10 15:08:52,132][02713] Saving new best policy, reward=17.315! |
|
[2025-03-10 15:08:57,126][00323] Fps is (10 sec: 4096.0, 60 sec: 4027.9, 300 sec: 3915.5). Total num frames: 2240512. Throughput: 0: 996.4. Samples: 559290. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:08:57,128][00323] Avg episode reward: [(0, '16.986')] |
|
[2025-03-10 15:09:00,399][02726] Updated weights for policy 0, policy_version 550 (0.0012) |
|
[2025-03-10 15:09:02,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2256896. Throughput: 0: 1008.3. Samples: 564590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:09:02,128][00323] Avg episode reward: [(0, '17.321')] |
|
[2025-03-10 15:09:02,215][02713] Saving new best policy, reward=17.321! |
|
[2025-03-10 15:09:07,126][00323] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 2281472. Throughput: 0: 1004.5. Samples: 571346. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:09:07,127][00323] Avg episode reward: [(0, '17.656')] |
|
[2025-03-10 15:09:07,129][02713] Saving new best policy, reward=17.656! |
|
[2025-03-10 15:09:09,610][02726] Updated weights for policy 0, policy_version 560 (0.0016) |
|
[2025-03-10 15:09:12,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2297856. Throughput: 0: 1000.6. Samples: 574348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:09:12,131][00323] Avg episode reward: [(0, '19.243')] |
|
[2025-03-10 15:09:12,137][02713] Saving new best policy, reward=19.243! |
|
[2025-03-10 15:09:17,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 2318336. Throughput: 0: 1010.7. Samples: 579588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:09:17,130][00323] Avg episode reward: [(0, '19.798')] |
|
[2025-03-10 15:09:17,135][02713] Saving new best policy, reward=19.798! |
|
[2025-03-10 15:09:20,108][02726] Updated weights for policy 0, policy_version 570 (0.0020) |
|
[2025-03-10 15:09:22,126][00323] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 2342912. Throughput: 0: 1007.6. Samples: 586506. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:09:22,130][00323] Avg episode reward: [(0, '19.666')] |
|
[2025-03-10 15:09:27,126][00323] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 2359296. Throughput: 0: 1004.6. Samples: 589468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:09:27,129][00323] Avg episode reward: [(0, '18.584')] |
|
[2025-03-10 15:09:30,855][02726] Updated weights for policy 0, policy_version 580 (0.0044) |
|
[2025-03-10 15:09:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 2379776. Throughput: 0: 1006.4. Samples: 594698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:09:32,127][00323] Avg episode reward: [(0, '17.054')] |
|
[2025-03-10 15:09:37,126][00323] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3929.4). Total num frames: 2404352. Throughput: 0: 1010.9. Samples: 601790. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:09:37,130][00323] Avg episode reward: [(0, '16.444')] |
|
[2025-03-10 15:09:40,163][02726] Updated weights for policy 0, policy_version 590 (0.0018) |
|
[2025-03-10 15:09:42,127][00323] Fps is (10 sec: 4095.7, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2420736. Throughput: 0: 1007.0. Samples: 604604. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:09:42,130][00323] Avg episode reward: [(0, '16.138')] |
|
[2025-03-10 15:09:47,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2441216. Throughput: 0: 1008.9. Samples: 609990. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:09:47,127][00323] Avg episode reward: [(0, '18.915')] |
|
[2025-03-10 15:09:50,514][02726] Updated weights for policy 0, policy_version 600 (0.0031) |
|
[2025-03-10 15:09:52,126][00323] Fps is (10 sec: 4096.3, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2461696. Throughput: 0: 1011.6. Samples: 616868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:09:52,127][00323] Avg episode reward: [(0, '20.009')] |
|
[2025-03-10 15:09:52,136][02713] Saving new best policy, reward=20.009! |
|
[2025-03-10 15:09:57,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2478080. Throughput: 0: 1004.2. Samples: 619536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:09:57,130][00323] Avg episode reward: [(0, '20.312')] |
|
[2025-03-10 15:09:57,134][02713] Saving new best policy, reward=20.312! |
|
[2025-03-10 15:10:01,454][02726] Updated weights for policy 0, policy_version 610 (0.0031) |
|
[2025-03-10 15:10:02,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2498560. Throughput: 0: 1008.0. Samples: 624948. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-03-10 15:10:02,127][00323] Avg episode reward: [(0, '20.315')] |
|
[2025-03-10 15:10:02,135][02713] Saving new best policy, reward=20.315! |
|
[2025-03-10 15:10:07,126][00323] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2523136. Throughput: 0: 1002.3. Samples: 631608. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:10:07,127][00323] Avg episode reward: [(0, '18.572')] |
|
[2025-03-10 15:10:12,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2535424. Throughput: 0: 988.3. Samples: 633940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:10:12,132][00323] Avg episode reward: [(0, '17.584')] |
|
[2025-03-10 15:10:12,597][02726] Updated weights for policy 0, policy_version 620 (0.0035) |
|
[2025-03-10 15:10:17,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2555904. Throughput: 0: 988.7. Samples: 639190. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:10:17,130][00323] Avg episode reward: [(0, '17.413')] |
|
[2025-03-10 15:10:21,943][02726] Updated weights for policy 0, policy_version 630 (0.0022) |
|
[2025-03-10 15:10:22,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 2580480. Throughput: 0: 979.5. Samples: 645866. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:10:22,129][00323] Avg episode reward: [(0, '18.406')] |
|
[2025-03-10 15:10:27,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2592768. Throughput: 0: 972.8. Samples: 648378. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:10:27,131][00323] Avg episode reward: [(0, '18.623')] |
|
[2025-03-10 15:10:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2617344. Throughput: 0: 976.8. Samples: 653948. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:10:32,130][00323] Avg episode reward: [(0, '17.381')] |
|
[2025-03-10 15:10:32,904][02726] Updated weights for policy 0, policy_version 640 (0.0023) |
|
[2025-03-10 15:10:37,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2637824. Throughput: 0: 901.6. Samples: 657442. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:10:37,132][00323] Avg episode reward: [(0, '18.249')] |
|
[2025-03-10 15:10:42,129][00323] Fps is (10 sec: 3685.4, 60 sec: 3891.1, 300 sec: 3957.1). Total num frames: 2654208. Throughput: 0: 969.1. Samples: 663150. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:10:42,134][00323] Avg episode reward: [(0, '18.076')] |
|
[2025-03-10 15:10:42,142][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000648_2654208.pth... |
|
[2025-03-10 15:10:42,318][02713] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000417_1708032.pth |
|
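The checkpoint filename encodes the learner's policy version and the total environment frames at save time, and older checkpoints are rotated out as new ones land, which is why the save above is paired with a removal. In this run the two numbers are locked together by an effective 4096 frames per policy update (648 × 4096 = 2654208, and the same factor holds for every checkpoint in the log). A small decoding sketch (the 4096 factor is inferred from this log, not a constant I can vouch for in general):

```python
from pathlib import Path

def decode_checkpoint(path):
    """Split checkpoint_000000648_2654208.pth into (policy_version, env_frames)."""
    _, version, frames = Path(path).stem.split("_")
    return int(version), int(frames)

version, frames = decode_checkpoint(
    "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000648_2654208.pth"
)
assert frames == version * 4096  # holds for every checkpoint saved in this run
```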
[2025-03-10 15:10:43,775][02726] Updated weights for policy 0, policy_version 650 (0.0019) |
|
[2025-03-10 15:10:47,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2674688. Throughput: 0: 976.2. Samples: 668878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:10:47,127][00323] Avg episode reward: [(0, '18.243')] |
|
[2025-03-10 15:10:52,126][00323] Fps is (10 sec: 4506.8, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2699264. Throughput: 0: 976.1. Samples: 675532. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:10:52,127][00323] Avg episode reward: [(0, '17.851')] |
|
[2025-03-10 15:10:53,452][02726] Updated weights for policy 0, policy_version 660 (0.0012) |
|
[2025-03-10 15:10:57,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 2711552. Throughput: 0: 970.9. Samples: 677630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:10:57,132][00323] Avg episode reward: [(0, '17.895')] |
|
[2025-03-10 15:11:02,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2732032. Throughput: 0: 981.2. Samples: 683344. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-10 15:11:02,131][00323] Avg episode reward: [(0, '18.199')] |
|
[2025-03-10 15:11:04,278][02726] Updated weights for policy 0, policy_version 670 (0.0017) |
|
[2025-03-10 15:11:07,130][00323] Fps is (10 sec: 4504.0, 60 sec: 3891.0, 300 sec: 3957.1). Total num frames: 2756608. Throughput: 0: 979.6. Samples: 689952. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:11:07,131][00323] Avg episode reward: [(0, '20.195')] |
|
[2025-03-10 15:11:12,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2768896. Throughput: 0: 971.4. Samples: 692092. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:11:12,127][00323] Avg episode reward: [(0, '20.315')] |
|
[2025-03-10 15:11:15,018][02726] Updated weights for policy 0, policy_version 680 (0.0026) |
|
[2025-03-10 15:11:17,126][00323] Fps is (10 sec: 3687.7, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2793472. Throughput: 0: 984.6. Samples: 698256. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:11:17,133][00323] Avg episode reward: [(0, '20.243')] |
|
[2025-03-10 15:11:22,126][00323] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3971.1). Total num frames: 2818048. Throughput: 0: 1057.0. Samples: 705006. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-10 15:11:22,127][00323] Avg episode reward: [(0, '20.007')] |
|
[2025-03-10 15:11:25,070][02726] Updated weights for policy 0, policy_version 690 (0.0014) |
|
[2025-03-10 15:11:27,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2830336. Throughput: 0: 975.8. Samples: 707058. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-10 15:11:27,127][00323] Avg episode reward: [(0, '19.906')] |
|
[2025-03-10 15:11:32,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2854912. Throughput: 0: 988.1. Samples: 713344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:11:32,127][00323] Avg episode reward: [(0, '19.044')] |
|
[2025-03-10 15:11:34,613][02726] Updated weights for policy 0, policy_version 700 (0.0014) |
|
[2025-03-10 15:11:37,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2875392. Throughput: 0: 987.4. Samples: 719964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:11:37,128][00323] Avg episode reward: [(0, '19.984')] |
|
[2025-03-10 15:11:42,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3957.2). Total num frames: 2891776. Throughput: 0: 984.8. Samples: 721946. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:11:42,127][00323] Avg episode reward: [(0, '19.852')] |
|
[2025-03-10 15:11:45,449][02726] Updated weights for policy 0, policy_version 710 (0.0013) |
|
[2025-03-10 15:11:47,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2912256. Throughput: 0: 1001.1. Samples: 728394. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:11:47,128][00323] Avg episode reward: [(0, '18.365')] |
|
[2025-03-10 15:11:52,132][00323] Fps is (10 sec: 4093.7, 60 sec: 3890.8, 300 sec: 3957.1). Total num frames: 2932736. Throughput: 0: 993.2. Samples: 734646. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:11:52,133][00323] Avg episode reward: [(0, '17.885')] |
|
[2025-03-10 15:11:56,440][02726] Updated weights for policy 0, policy_version 720 (0.0022) |
|
[2025-03-10 15:11:57,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2949120. Throughput: 0: 990.2. Samples: 736652. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:11:57,127][00323] Avg episode reward: [(0, '18.521')] |
|
[2025-03-10 15:12:02,126][00323] Fps is (10 sec: 4098.3, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 2973696. Throughput: 0: 997.2. Samples: 743128. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:12:02,134][00323] Avg episode reward: [(0, '18.609')] |
|
[2025-03-10 15:12:05,499][02726] Updated weights for policy 0, policy_version 730 (0.0017) |
|
[2025-03-10 15:12:07,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3959.7, 300 sec: 3971.0). Total num frames: 2994176. Throughput: 0: 989.0. Samples: 749510. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:12:07,128][00323] Avg episode reward: [(0, '19.884')] |
|
[2025-03-10 15:12:12,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3010560. Throughput: 0: 990.5. Samples: 751630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:12:12,132][00323] Avg episode reward: [(0, '20.660')] |
|
[2025-03-10 15:12:12,138][02713] Saving new best policy, reward=20.660! |
|
[2025-03-10 15:12:15,959][02726] Updated weights for policy 0, policy_version 740 (0.0039) |
|
[2025-03-10 15:12:17,126][00323] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3035136. Throughput: 0: 1002.0. Samples: 758436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:12:17,131][00323] Avg episode reward: [(0, '21.458')] |
|
[2025-03-10 15:12:17,135][02713] Saving new best policy, reward=21.458! |
|
[2025-03-10 15:12:22,130][00323] Fps is (10 sec: 4504.0, 60 sec: 3959.2, 300 sec: 3971.0). Total num frames: 3055616. Throughput: 0: 991.0. Samples: 764562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:12:22,131][00323] Avg episode reward: [(0, '20.710')] |
|
[2025-03-10 15:12:26,931][02726] Updated weights for policy 0, policy_version 750 (0.0016) |
|
[2025-03-10 15:12:27,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3072000. Throughput: 0: 992.0. Samples: 766588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:12:27,129][00323] Avg episode reward: [(0, '20.671')] |
|
[2025-03-10 15:12:32,126][00323] Fps is (10 sec: 3687.7, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3092480. Throughput: 0: 999.6. Samples: 773376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:12:32,127][00323] Avg episode reward: [(0, '20.626')] |
|
[2025-03-10 15:12:36,166][02726] Updated weights for policy 0, policy_version 760 (0.0015) |
|
[2025-03-10 15:12:37,127][00323] Fps is (10 sec: 4095.4, 60 sec: 3959.4, 300 sec: 3957.1). Total num frames: 3112960. Throughput: 0: 995.3. Samples: 779430. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:12:37,129][00323] Avg episode reward: [(0, '20.635')] |
|
[2025-03-10 15:12:42,126][00323] Fps is (10 sec: 4095.8, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3133440. Throughput: 0: 1001.8. Samples: 781732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:12:42,133][00323] Avg episode reward: [(0, '20.707')] |
|
[2025-03-10 15:12:42,140][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000765_3133440.pth... |
|
[2025-03-10 15:12:42,263][02713] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000532_2179072.pth |
|
[2025-03-10 15:12:46,277][02726] Updated weights for policy 0, policy_version 770 (0.0043) |
|
[2025-03-10 15:12:47,126][00323] Fps is (10 sec: 4096.6, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3153920. Throughput: 0: 1014.8. Samples: 788794. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:12:47,131][00323] Avg episode reward: [(0, '20.217')] |
|
[2025-03-10 15:12:52,126][00323] Fps is (10 sec: 4096.2, 60 sec: 4028.1, 300 sec: 3985.0). Total num frames: 3174400. Throughput: 0: 1003.1. Samples: 794648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:12:52,130][00323] Avg episode reward: [(0, '21.242')] |
|
[2025-03-10 15:12:56,925][02726] Updated weights for policy 0, policy_version 780 (0.0016) |
|
[2025-03-10 15:12:57,126][00323] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 3194880. Throughput: 0: 1010.5. Samples: 797104. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:12:57,128][00323] Avg episode reward: [(0, '22.461')] |
|
[2025-03-10 15:12:57,130][02713] Saving new best policy, reward=22.461! |
|
[2025-03-10 15:13:02,126][00323] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3215360. Throughput: 0: 1008.1. Samples: 803800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:13:02,131][00323] Avg episode reward: [(0, '22.802')] |
|
[2025-03-10 15:13:02,145][02713] Saving new best policy, reward=22.802! |
|
[2025-03-10 15:13:07,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3231744. Throughput: 0: 995.0. Samples: 809332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:13:07,130][00323] Avg episode reward: [(0, '23.442')] |
|
[2025-03-10 15:13:07,132][02713] Saving new best policy, reward=23.442! |
|
[2025-03-10 15:13:07,490][02726] Updated weights for policy 0, policy_version 790 (0.0017) |
|
[2025-03-10 15:13:12,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3252224. Throughput: 0: 1008.3. Samples: 811960. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:13:12,127][00323] Avg episode reward: [(0, '24.418')] |
|
[2025-03-10 15:13:12,133][02713] Saving new best policy, reward=24.418! |
|
[2025-03-10 15:13:16,901][02726] Updated weights for policy 0, policy_version 800 (0.0016) |
|
[2025-03-10 15:13:17,126][00323] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3276800. Throughput: 0: 1009.9. Samples: 818822. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:13:17,127][00323] Avg episode reward: [(0, '21.821')] |
|
[2025-03-10 15:13:22,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3959.7, 300 sec: 3984.9). Total num frames: 3293184. Throughput: 0: 998.9. Samples: 824380. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:13:22,129][00323] Avg episode reward: [(0, '22.114')] |
|
[2025-03-10 15:13:27,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3313664. Throughput: 0: 1006.4. Samples: 827018. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:13:27,131][00323] Avg episode reward: [(0, '20.913')] |
|
[2025-03-10 15:13:27,672][02726] Updated weights for policy 0, policy_version 810 (0.0016) |
|
[2025-03-10 15:13:32,126][00323] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3334144. Throughput: 0: 1004.9. Samples: 834016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:13:32,127][00323] Avg episode reward: [(0, '19.935')] |
|
[2025-03-10 15:13:37,127][00323] Fps is (10 sec: 3686.1, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3350528. Throughput: 0: 981.2. Samples: 838802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-10 15:13:37,128][00323] Avg episode reward: [(0, '19.789')] |
|
[2025-03-10 15:13:38,954][02726] Updated weights for policy 0, policy_version 820 (0.0025) |
|
[2025-03-10 15:13:42,126][00323] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3371008. Throughput: 0: 986.6. Samples: 841502. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:13:42,131][00323] Avg episode reward: [(0, '19.985')] |
|
[2025-03-10 15:13:47,126][00323] Fps is (10 sec: 4096.3, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3391488. Throughput: 0: 989.8. Samples: 848342. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:13:47,127][00323] Avg episode reward: [(0, '21.002')] |
|
[2025-03-10 15:13:48,059][02726] Updated weights for policy 0, policy_version 830 (0.0019) |
|
[2025-03-10 15:13:52,127][00323] Fps is (10 sec: 3686.2, 60 sec: 3891.2, 300 sec: 3957.1). Total num frames: 3407872. Throughput: 0: 979.5. Samples: 853408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:13:52,130][00323] Avg episode reward: [(0, '20.639')] |
|
[2025-03-10 15:13:57,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 3428352. Throughput: 0: 985.1. Samples: 856288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:13:57,130][00323] Avg episode reward: [(0, '19.198')] |
|
[2025-03-10 15:13:58,966][02726] Updated weights for policy 0, policy_version 840 (0.0033) |
|
[2025-03-10 15:14:02,126][00323] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3452928. Throughput: 0: 985.0. Samples: 863146. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-10 15:14:02,128][00323] Avg episode reward: [(0, '19.724')] |
|
[2025-03-10 15:14:07,131][00323] Fps is (10 sec: 4094.1, 60 sec: 3959.2, 300 sec: 3971.0). Total num frames: 3469312. Throughput: 0: 971.6. Samples: 868108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:14:07,132][00323] Avg episode reward: [(0, '19.241')] |
|
[2025-03-10 15:14:10,029][02726] Updated weights for policy 0, policy_version 850 (0.0021) |
|
[2025-03-10 15:14:12,126][00323] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3489792. Throughput: 0: 977.1. Samples: 870988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:14:12,130][00323] Avg episode reward: [(0, '18.168')] |
|
[2025-03-10 15:14:17,126][00323] Fps is (10 sec: 4097.9, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 3510272. Throughput: 0: 965.2. Samples: 877450. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:14:17,131][00323] Avg episode reward: [(0, '19.515')] |
|
[2025-03-10 15:14:20,633][02726] Updated weights for policy 0, policy_version 860 (0.0018) |
|
[2025-03-10 15:14:22,133][00323] Fps is (10 sec: 3274.7, 60 sec: 3822.5, 300 sec: 3943.2). Total num frames: 3522560. Throughput: 0: 965.2. Samples: 882242. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:14:22,134][00323] Avg episode reward: [(0, '19.919')] |
|
[2025-03-10 15:14:27,126][00323] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3943.3). Total num frames: 3543040. Throughput: 0: 969.6. Samples: 885132. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:14:27,127][00323] Avg episode reward: [(0, '19.971')] |
|
[2025-03-10 15:14:30,795][02726] Updated weights for policy 0, policy_version 870 (0.0016) |
|
[2025-03-10 15:14:32,126][00323] Fps is (10 sec: 4508.6, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3567616. Throughput: 0: 964.2. Samples: 891732. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:14:32,127][00323] Avg episode reward: [(0, '19.808')] |
|
[2025-03-10 15:14:37,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3929.4). Total num frames: 3579904. Throughput: 0: 952.1. Samples: 896254. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:14:37,131][00323] Avg episode reward: [(0, '20.282')] |
|
[2025-03-10 15:14:42,080][02726] Updated weights for policy 0, policy_version 880 (0.0013) |
|
[2025-03-10 15:14:42,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3604480. Throughput: 0: 955.9. Samples: 899302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-03-10 15:14:42,131][00323] Avg episode reward: [(0, '19.646')] |
|
[2025-03-10 15:14:42,138][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000880_3604480.pth... |
|
[2025-03-10 15:14:42,260][02713] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000648_2654208.pth |
|
[2025-03-10 15:14:47,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3624960. Throughput: 0: 953.7. Samples: 906062. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:14:47,134][00323] Avg episode reward: [(0, '20.932')] |
|
[2025-03-10 15:14:52,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3641344. Throughput: 0: 953.7. Samples: 911020. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-03-10 15:14:52,128][00323] Avg episode reward: [(0, '20.629')] |
|
[2025-03-10 15:14:53,065][02726] Updated weights for policy 0, policy_version 890 (0.0024) |
|
[2025-03-10 15:14:57,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3661824. Throughput: 0: 963.3. Samples: 914338. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:14:57,127][00323] Avg episode reward: [(0, '21.002')] |
|
[2025-03-10 15:15:01,883][02726] Updated weights for policy 0, policy_version 900 (0.0026) |
|
[2025-03-10 15:15:02,126][00323] Fps is (10 sec: 4505.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3686400. Throughput: 0: 974.6. Samples: 921306. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:15:02,128][00323] Avg episode reward: [(0, '21.333')] |
|
[2025-03-10 15:15:07,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3943.3). Total num frames: 3698688. Throughput: 0: 975.4. Samples: 926130. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:15:07,127][00323] Avg episode reward: [(0, '21.835')] |
|
[2025-03-10 15:15:12,126][00323] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 3723264. Throughput: 0: 986.9. Samples: 929544. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:15:12,127][00323] Avg episode reward: [(0, '20.469')] |
|
[2025-03-10 15:15:12,523][02726] Updated weights for policy 0, policy_version 910 (0.0031) |
|
[2025-03-10 15:15:17,126][00323] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3747840. Throughput: 0: 995.5. Samples: 936528. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-03-10 15:15:17,128][00323] Avg episode reward: [(0, '21.908')] |
|
[2025-03-10 15:15:22,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.9, 300 sec: 3957.2). Total num frames: 3760128. Throughput: 0: 1003.2. Samples: 941398. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-03-10 15:15:22,127][00323] Avg episode reward: [(0, '21.898')] |
|
[2025-03-10 15:15:23,200][02726] Updated weights for policy 0, policy_version 920 (0.0014) |
|
[2025-03-10 15:15:27,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3784704. Throughput: 0: 1010.9. Samples: 944794. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-03-10 15:15:27,132][00323] Avg episode reward: [(0, '21.122')] |
|
[2025-03-10 15:15:32,126][00323] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3805184. Throughput: 0: 1015.4. Samples: 951756. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:15:32,131][00323] Avg episode reward: [(0, '21.556')] |
|
[2025-03-10 15:15:32,287][02726] Updated weights for policy 0, policy_version 930 (0.0021) |
|
[2025-03-10 15:15:37,126][00323] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3821568. Throughput: 0: 1008.4. Samples: 956396. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:15:37,127][00323] Avg episode reward: [(0, '21.465')] |
|
[2025-03-10 15:15:42,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3842048. Throughput: 0: 1009.1. Samples: 959748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:15:42,129][00323] Avg episode reward: [(0, '20.784')] |
|
[2025-03-10 15:15:43,207][02726] Updated weights for policy 0, policy_version 940 (0.0024) |
|
[2025-03-10 15:15:47,126][00323] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3866624. Throughput: 0: 1004.8. Samples: 966522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:15:47,129][00323] Avg episode reward: [(0, '20.231')] |
|
[2025-03-10 15:15:52,126][00323] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3957.1). Total num frames: 3878912. Throughput: 0: 1001.1. Samples: 971178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:15:52,132][00323] Avg episode reward: [(0, '21.030')] |
|
[2025-03-10 15:15:54,181][02726] Updated weights for policy 0, policy_version 950 (0.0020) |
|
[2025-03-10 15:15:57,127][00323] Fps is (10 sec: 3686.1, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3903488. Throughput: 0: 997.9. Samples: 974452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:15:57,131][00323] Avg episode reward: [(0, '21.660')] |
|
[2025-03-10 15:16:02,126][00323] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3923968. Throughput: 0: 994.8. Samples: 981292. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:16:02,132][00323] Avg episode reward: [(0, '22.224')] |
|
[2025-03-10 15:16:04,149][02726] Updated weights for policy 0, policy_version 960 (0.0018) |
|
[2025-03-10 15:16:07,126][00323] Fps is (10 sec: 3686.8, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3940352. Throughput: 0: 987.7. Samples: 985844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:16:07,127][00323] Avg episode reward: [(0, '22.803')] |
|
[2025-03-10 15:16:12,126][00323] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3960832. Throughput: 0: 985.9. Samples: 989160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:16:12,127][00323] Avg episode reward: [(0, '22.539')] |
|
[2025-03-10 15:16:14,504][02726] Updated weights for policy 0, policy_version 970 (0.0031) |
|
[2025-03-10 15:16:17,126][00323] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3981312. Throughput: 0: 979.7. Samples: 995844. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-03-10 15:16:17,128][00323] Avg episode reward: [(0, '22.941')] |
|
[2025-03-10 15:16:22,126][00323] Fps is (10 sec: 3686.3, 60 sec: 3959.4, 300 sec: 3957.1). Total num frames: 3997696. Throughput: 0: 980.8. Samples: 1000532. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-03-10 15:16:22,133][00323] Avg episode reward: [(0, '22.057')] |
|
[2025-03-10 15:16:23,515][00323] Component Batcher_0 stopped! |
|
[2025-03-10 15:16:23,515][02713] Stopping Batcher_0... |
|
[2025-03-10 15:16:23,519][02713] Loop batcher_evt_loop terminating... |
|
[2025-03-10 15:16:23,516][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-03-10 15:16:23,582][02726] Weights refcount: 2 0 |
|
[2025-03-10 15:16:23,586][00323] Component InferenceWorker_p0-w0 stopped! |
|
[2025-03-10 15:16:23,586][02726] Stopping InferenceWorker_p0-w0... |
|
[2025-03-10 15:16:23,590][02726] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-03-10 15:16:23,643][02713] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000765_3133440.pth |
|
[2025-03-10 15:16:23,676][02713] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-03-10 15:16:23,814][00323] Component RolloutWorker_w0 stopped! |
|
[2025-03-10 15:16:23,814][02727] Stopping RolloutWorker_w0... |
|
[2025-03-10 15:16:23,820][02727] Loop rollout_proc0_evt_loop terminating... |
|
[2025-03-10 15:16:23,863][00323] Component LearnerWorker_p0 stopped! |
|
[2025-03-10 15:16:23,863][02713] Stopping LearnerWorker_p0... |
|
[2025-03-10 15:16:23,868][02713] Loop learner_proc0_evt_loop terminating... |
|
[2025-03-10 15:16:23,871][00323] Component RolloutWorker_w2 stopped! |
|
[2025-03-10 15:16:23,872][02730] Stopping RolloutWorker_w2... |
|
[2025-03-10 15:16:23,873][02730] Loop rollout_proc2_evt_loop terminating... |
|
[2025-03-10 15:16:23,883][00323] Component RolloutWorker_w6 stopped! |
|
[2025-03-10 15:16:23,884][02733] Stopping RolloutWorker_w6... |
|
[2025-03-10 15:16:23,886][02733] Loop rollout_proc6_evt_loop terminating... |
|
[2025-03-10 15:16:23,931][00323] Component RolloutWorker_w4 stopped! |
|
[2025-03-10 15:16:23,933][02731] Stopping RolloutWorker_w4... |
|
[2025-03-10 15:16:23,934][02731] Loop rollout_proc4_evt_loop terminating... |
|
[2025-03-10 15:16:24,016][02728] Stopping RolloutWorker_w1... |
|
[2025-03-10 15:16:24,016][00323] Component RolloutWorker_w1 stopped! |
|
[2025-03-10 15:16:24,016][02728] Loop rollout_proc1_evt_loop terminating... |
|
[2025-03-10 15:16:24,028][00323] Component RolloutWorker_w3 stopped! |
|
[2025-03-10 15:16:24,032][02729] Stopping RolloutWorker_w3... |
|
[2025-03-10 15:16:24,034][02729] Loop rollout_proc3_evt_loop terminating... |
|
[2025-03-10 15:16:24,037][00323] Component RolloutWorker_w5 stopped! |
|
[2025-03-10 15:16:24,043][02732] Stopping RolloutWorker_w5... |
|
[2025-03-10 15:16:24,043][02732] Loop rollout_proc5_evt_loop terminating... |
|
[2025-03-10 15:16:24,059][00323] Component RolloutWorker_w7 stopped! |
|
[2025-03-10 15:16:24,063][00323] Waiting for process learner_proc0 to stop... |
|
[2025-03-10 15:16:24,067][02734] Stopping RolloutWorker_w7... |
|
[2025-03-10 15:16:24,067][02734] Loop rollout_proc7_evt_loop terminating... |
|
[2025-03-10 15:16:25,891][00323] Waiting for process inference_proc0-0 to join... |
|
[2025-03-10 15:16:25,903][00323] Waiting for process rollout_proc0 to join... |
|
[2025-03-10 15:16:28,027][00323] Waiting for process rollout_proc1 to join... |
|
[2025-03-10 15:16:28,029][00323] Waiting for process rollout_proc2 to join... |
|
[2025-03-10 15:16:28,032][00323] Waiting for process rollout_proc3 to join... |
|
[2025-03-10 15:16:28,034][00323] Waiting for process rollout_proc4 to join... |
|
[2025-03-10 15:16:28,035][00323] Waiting for process rollout_proc5 to join... |
|
[2025-03-10 15:16:28,038][00323] Waiting for process rollout_proc6 to join... |
|
[2025-03-10 15:16:28,040][00323] Waiting for process rollout_proc7 to join... |
|
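Shutdown is ordered: every component is first told to stop and its event loop terminates, then the runner joins the underlying OS processes one by one so no children are left behind. A generic sketch of that stop-then-join pattern in Python multiprocessing (illustrative only, not Sample Factory's actual event-loop code):

```python
import multiprocessing as mp
import time

def worker(stop_event):
    """Stand-in for a rollout/inference/learner event loop."""
    while not stop_event.is_set():
        time.sleep(0.01)  # a real worker would process queued messages here

if __name__ == "__main__":
    stop = mp.Event()
    procs = [mp.Process(target=worker, args=(stop,)) for _ in range(8)]
    for p in procs:
        p.start()
    stop.set()        # analogous to the "Stopping ..." lines above
    for p in procs:
        p.join()      # analogous to the "Waiting for process ... to join" lines
```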
[2025-03-10 15:16:28,043][00323] Batcher 0 profile tree view: |
|
batching: 26.1252, releasing_batches: 0.0290 |
|
[2025-03-10 15:16:28,044][00323] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0093 |
|
  wait_policy_total: 404.4311 |
|
update_model: 8.3236 |
|
  weight_update: 0.0026 |
|
one_step: 0.0058 |
|
  handle_policy_step: 582.4455 |
|
    deserialize: 14.1570, stack: 3.0206, obs_to_device_normalize: 121.7784, forward: 299.8394, send_messages: 27.9335 |
|
    prepare_outputs: 90.2868 |
|
      to_cpu: 55.9679 |
|
[2025-03-10 15:16:28,045][00323] Learner 0 profile tree view: |
|
misc: 0.0048, prepare_batch: 13.0198 |
|
train: 73.0181 |
|
  epoch_init: 0.0044, minibatch_init: 0.0059, losses_postprocess: 0.6595, kl_divergence: 0.7106, after_optimizer: 33.5805 |
|
  calculate_losses: 25.7358 |
|
    losses_init: 0.0035, forward_head: 1.3776, bptt_initial: 16.8832, tail: 1.1778, advantages_returns: 0.2786, losses: 3.5536 |
|
    bptt: 2.1886 |
|
      bptt_forward_core: 2.1208 |
|
  update: 11.6032 |
|
    clip: 0.8776 |
|
[2025-03-10 15:16:28,048][00323] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.2987, enqueue_policy_requests: 99.4272, env_step: 816.3022, overhead: 12.2669, complete_rollouts: 7.0703 |
|
save_policy_outputs: 18.2786 |
|
  split_output_tensors: 7.2607 |
|
[2025-03-10 15:16:28,049][00323] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.2382, enqueue_policy_requests: 101.2742, env_step: 811.0664, overhead: 12.5095, complete_rollouts: 7.5359 |
|
save_policy_outputs: 17.9381 |
|
  split_output_tensors: 7.1257 |
|
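Reading the trees above: of the roughly 1060 s main loop, rollout worker 0 spent about 816 s inside env_step, the inference worker spent about 582 s in handle_policy_step (some 300 s of it in the forward pass), while the learner needed only about 73 s of train time, so this run is bound by environment stepping and inference rather than by the learner. A quick share computation (numbers copied from the trees; the components run in parallel processes, so the shares deliberately do not sum to 100%):

```python
main_loop = 1060.2422  # seconds, from the runner profile below
components = {
    "rollout w0 env_step": 816.3022,
    "inference handle_policy_step": 582.4455,
    "inference forward only": 299.8394,
    "learner train": 73.0181,
}
for name, seconds in components.items():
    print(f"{name}: {seconds / main_loop:.1%} of the main loop")
```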
[2025-03-10 15:16:28,051][00323] Loop Runner_EvtLoop terminating... |
|
[2025-03-10 15:16:28,053][00323] Runner profile tree view: |
|
main_loop: 1060.2422 |
|
[2025-03-10 15:16:28,055][00323] Collected {0: 4005888}, FPS: 3778.3 |
|
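The closing summary is internally consistent: 4005888 frames over the 1060.2 s main loop works out to the reported 3778.3 FPS, and the frame count sits just past a 4M-frame budget (presumably the train_for_env_steps target, which this log does not state explicitly).

```python
frames, wall_clock = 4_005_888, 1060.2422
print(round(frames / wall_clock, 1))  # 3778.3, matching the reported FPS
```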
[2025-03-10 16:03:53,147][00323] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-03-10 16:03:53,149][00323] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-03-10 16:03:53,149][00323] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-03-10 16:03:53,150][00323] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-03-10 16:03:53,151][00323] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-10 16:03:53,151][00323] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-03-10 16:03:53,152][00323] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-10 16:03:53,153][00323] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-03-10 16:03:53,154][00323] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-03-10 16:03:53,155][00323] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-03-10 16:03:53,156][00323] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-03-10 16:03:53,156][00323] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-03-10 16:03:53,157][00323] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-03-10 16:03:53,158][00323] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
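The evaluation entry point rebuilds its configuration by loading the saved config.json and layering the command-line arguments on top: keys already present are overridden, unknown keys are added with a warning, which is exactly what the messages above show. A minimal sketch of that merge logic (my reconstruction, not the library's actual implementation):

```python
import json

def merge_config(saved_path, cli_args):
    """Load a saved experiment config and layer CLI arguments on top of it."""
    with open(saved_path) as f:
        cfg = json.load(f)
    for key, value in cli_args.items():
        if key in cfg:
            print(f"Overriding arg {key!r} with value {value!r} passed from command line")
        else:
            print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg

cfg = merge_config(
    "/content/train_dir/default_experiment/config.json",
    {"num_workers": 1, "no_render": True, "save_video": True},
)
```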
[2025-03-10 16:03:53,158][00323] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
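Frameskip 1 with render_action_repeat=4 means the engine renders every frame but each chosen action is held for four frames, so the "Num frames" counters below advance four rendered frames per policy decision. A generic gymnasium-style sketch of action repeat (illustrative, not the library's own wrapper):

```python
import gymnasium as gym

class RepeatAction(gym.Wrapper):
    """Repeat each agent action for `repeat` env steps, summing the rewards."""

    def __init__(self, env, repeat=4):
        super().__init__(env)
        self.repeat = repeat

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.repeat):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info
```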
[2025-03-10 16:03:53,193][00323] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-03-10 16:03:53,197][00323] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-10 16:03:53,199][00323] RunningMeanStd input shape: (1,) |
|
[2025-03-10 16:03:53,220][00323] ConvEncoder: input_channels=3 |
|
[2025-03-10 16:03:53,320][00323] Conv encoder output size: 512 |
|
[2025-03-10 16:03:53,321][00323] Policy head output size: 512 |
|
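The logged sizes can be reproduced from the input shape. Assuming the usual three-layer conv stack for this encoder family (8×8 stride 4, 4×4 stride 2, 3×3 stride 2; an assumption on my part, chosen because it reproduces the logged numbers), a (3, 72, 128) observation flattens to 128 × 3 × 6 = 2304 features, which a linear layer projects to the 512-dim "Conv encoder output size". A PyTorch sketch that checks the arithmetic:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),    # (3, 72, 128) -> (32, 17, 31)
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),   # -> (64, 7, 14)
    nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),  # -> (128, 3, 6)
    nn.Flatten(),                                           # -> 2304
    nn.Linear(128 * 3 * 6, 512), nn.ELU(),                  # -> 512
)
print(encoder(torch.zeros(1, 3, 72, 128)).shape)  # torch.Size([1, 512])
```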
[2025-03-10 16:03:53,511][00323] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
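The restored .pth file is a plain torch pickle, so its contents can be inspected directly; I am not asserting a particular key schema here, only that torch.load returns whatever dict was saved:

```python
import torch

ckpt_path = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth"
ckpt = torch.load(ckpt_path, map_location="cpu", weights_only=False)
print(sorted(ckpt.keys()))
# If there is a "model" entry holding a state_dict (an assumption about the layout):
# n_params = sum(p.numel() for p in ckpt["model"].values())
```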
[2025-03-10 16:03:54,293][00323] Num frames 100... |
|
[2025-03-10 16:03:54,424][00323] Num frames 200... |
|
[2025-03-10 16:03:54,554][00323] Num frames 300... |
|
[2025-03-10 16:03:54,680][00323] Num frames 400... |
|
[2025-03-10 16:03:54,808][00323] Num frames 500... |
|
[2025-03-10 16:03:54,882][00323] Avg episode rewards: #0: 10.140, true rewards: #0: 5.140 |
|
[2025-03-10 16:03:54,883][00323] Avg episode reward: 10.140, avg true_objective: 5.140 |
|
[2025-03-10 16:03:54,992][00323] Num frames 600... |
|
[2025-03-10 16:03:55,121][00323] Num frames 700... |
|
[2025-03-10 16:03:55,248][00323] Num frames 800... |
|
[2025-03-10 16:03:55,387][00323] Num frames 900... |
|
[2025-03-10 16:03:55,517][00323] Num frames 1000... |
|
[2025-03-10 16:03:55,569][00323] Avg episode rewards: #0: 10.500, true rewards: #0: 5.000 |
|
[2025-03-10 16:03:55,570][00323] Avg episode reward: 10.500, avg true_objective: 5.000 |
|
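The reward lines report running means over all episodes so far, so individual episode scores can be recovered by differencing: after episode 1 the true reward is 5.140, and a two-episode mean of 5.000 implies episode 2 scored 2 * 5.000 - 5.140 = 4.860. The recovered per-episode true rewards track episode length closely (roughly rendered frames / 100), consistent with a survival-style objective. A short sketch over the ten means from this evaluation:

```python
def per_episode(running_means):
    """Recover individual episode scores from a sequence of running averages."""
    scores, prev_total = [], 0.0
    for n, mean in enumerate(running_means, start=1):
        total = mean * n
        scores.append(total - prev_total)
        prev_total = total
    return scores

true_means = [5.140, 5.000, 10.333, 11.385, 10.268, 9.517, 9.340, 10.132, 9.504, 9.629]
print([round(s, 2) for s in per_episode(true_means)])
```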
[2025-03-10 16:03:55,698][00323] Num frames 1100... |
|
[2025-03-10 16:03:55,863][00323] Num frames 1200... |
|
[2025-03-10 16:03:56,038][00323] Num frames 1300... |
|
[2025-03-10 16:03:56,208][00323] Num frames 1400... |
|
[2025-03-10 16:03:56,384][00323] Num frames 1500... |
|
[2025-03-10 16:03:56,551][00323] Num frames 1600... |
|
[2025-03-10 16:03:56,722][00323] Num frames 1700... |
|
[2025-03-10 16:03:56,888][00323] Num frames 1800... |
|
[2025-03-10 16:03:57,053][00323] Num frames 1900... |
|
[2025-03-10 16:03:57,225][00323] Num frames 2000... |
|
[2025-03-10 16:03:57,422][00323] Num frames 2100... |
|
[2025-03-10 16:03:57,599][00323] Num frames 2200... |
|
[2025-03-10 16:03:57,779][00323] Num frames 2300... |
|
[2025-03-10 16:03:57,931][00323] Num frames 2400... |
|
[2025-03-10 16:03:58,058][00323] Num frames 2500... |
|
[2025-03-10 16:03:58,187][00323] Num frames 2600... |
|
[2025-03-10 16:03:58,318][00323] Num frames 2700... |
|
[2025-03-10 16:03:58,458][00323] Num frames 2800... |
|
[2025-03-10 16:03:58,588][00323] Num frames 2900... |
|
[2025-03-10 16:03:58,717][00323] Num frames 3000... |
|
[2025-03-10 16:03:58,848][00323] Num frames 3100... |
|
[2025-03-10 16:03:58,899][00323] Avg episode rewards: #0: 24.333, true rewards: #0: 10.333 |
|
[2025-03-10 16:03:58,900][00323] Avg episode reward: 24.333, avg true_objective: 10.333 |
|
[2025-03-10 16:03:59,035][00323] Num frames 3200... |
|
[2025-03-10 16:03:59,162][00323] Num frames 3300... |
|
[2025-03-10 16:03:59,292][00323] Num frames 3400... |
|
[2025-03-10 16:03:59,427][00323] Num frames 3500... |
|
[2025-03-10 16:03:59,558][00323] Num frames 3600... |
|
[2025-03-10 16:03:59,682][00323] Num frames 3700... |
|
[2025-03-10 16:03:59,850][00323] Num frames 3800... |
|
[2025-03-10 16:03:59,978][00323] Num frames 3900... |
|
[2025-03-10 16:04:00,107][00323] Num frames 4000... |
|
[2025-03-10 16:04:00,234][00323] Num frames 4100... |
|
[2025-03-10 16:04:00,359][00323] Num frames 4200... |
|
[2025-03-10 16:04:00,501][00323] Num frames 4300... |
|
[2025-03-10 16:04:00,627][00323] Num frames 4400... |
|
[2025-03-10 16:04:00,756][00323] Num frames 4500... |
|
[2025-03-10 16:04:00,880][00323] Avg episode rewards: #0: 27.135, true rewards: #0: 11.385 |
|
[2025-03-10 16:04:00,881][00323] Avg episode reward: 27.135, avg true_objective: 11.385 |
|
[2025-03-10 16:04:00,944][00323] Num frames 4600... |
|
[2025-03-10 16:04:01,072][00323] Num frames 4700... |
|
[2025-03-10 16:04:01,199][00323] Num frames 4800... |
|
[2025-03-10 16:04:01,327][00323] Num frames 4900... |
|
[2025-03-10 16:04:01,462][00323] Num frames 5000... |
|
[2025-03-10 16:04:01,596][00323] Num frames 5100... |
|
[2025-03-10 16:04:01,695][00323] Avg episode rewards: #0: 24.068, true rewards: #0: 10.268 |
|
[2025-03-10 16:04:01,696][00323] Avg episode reward: 24.068, avg true_objective: 10.268 |
|
[2025-03-10 16:04:01,784][00323] Num frames 5200... |
|
[2025-03-10 16:04:01,918][00323] Num frames 5300... |
|
[2025-03-10 16:04:02,046][00323] Num frames 5400... |
|
[2025-03-10 16:04:02,176][00323] Num frames 5500... |
|
[2025-03-10 16:04:02,304][00323] Num frames 5600... |
|
[2025-03-10 16:04:02,433][00323] Num frames 5700... |
|
[2025-03-10 16:04:02,501][00323] Avg episode rewards: #0: 21.850, true rewards: #0: 9.517 |
|
[2025-03-10 16:04:02,504][00323] Avg episode reward: 21.850, avg true_objective: 9.517 |
|
[2025-03-10 16:04:02,617][00323] Num frames 5800... |
|
[2025-03-10 16:04:02,748][00323] Num frames 5900... |
|
[2025-03-10 16:04:02,874][00323] Num frames 6000... |
|
[2025-03-10 16:04:03,002][00323] Num frames 6100... |
|
[2025-03-10 16:04:03,129][00323] Num frames 6200... |
|
[2025-03-10 16:04:03,256][00323] Num frames 6300... |
|
[2025-03-10 16:04:03,384][00323] Num frames 6400... |
|
[2025-03-10 16:04:03,516][00323] Num frames 6500... |
|
[2025-03-10 16:04:03,626][00323] Avg episode rewards: #0: 21.340, true rewards: #0: 9.340 |
|
[2025-03-10 16:04:03,627][00323] Avg episode reward: 21.340, avg true_objective: 9.340 |
|
[2025-03-10 16:04:03,706][00323] Num frames 6600... |
|
[2025-03-10 16:04:03,840][00323] Num frames 6700... |
|
[2025-03-10 16:04:03,970][00323] Num frames 6800... |
|
[2025-03-10 16:04:04,095][00323] Num frames 6900... |
|
[2025-03-10 16:04:04,220][00323] Num frames 7000... |
|
[2025-03-10 16:04:04,345][00323] Num frames 7100... |
|
[2025-03-10 16:04:04,477][00323] Num frames 7200... |
|
[2025-03-10 16:04:04,610][00323] Num frames 7300... |
|
[2025-03-10 16:04:04,738][00323] Num frames 7400... |
|
[2025-03-10 16:04:04,865][00323] Num frames 7500... |
|
[2025-03-10 16:04:04,994][00323] Num frames 7600... |
|
[2025-03-10 16:04:05,119][00323] Num frames 7700... |
|
[2025-03-10 16:04:05,246][00323] Num frames 7800... |
|
[2025-03-10 16:04:05,382][00323] Num frames 7900... |
|
[2025-03-10 16:04:05,511][00323] Num frames 8000... |
|
[2025-03-10 16:04:05,647][00323] Num frames 8100... |
|
[2025-03-10 16:04:05,710][00323] Avg episode rewards: #0: 22.757, true rewards: #0: 10.132 |
|
[2025-03-10 16:04:05,711][00323] Avg episode reward: 22.757, avg true_objective: 10.132 |
|
[2025-03-10 16:04:05,832][00323] Num frames 8200... |
|
[2025-03-10 16:04:05,959][00323] Num frames 8300... |
|
[2025-03-10 16:04:06,085][00323] Num frames 8400... |
|
[2025-03-10 16:04:06,212][00323] Num frames 8500... |
|
[2025-03-10 16:04:06,335][00323] Avg episode rewards: #0: 20.838, true rewards: #0: 9.504 |
|
[2025-03-10 16:04:06,336][00323] Avg episode reward: 20.838, avg true_objective: 9.504 |
|
[2025-03-10 16:04:06,400][00323] Num frames 8600... |
|
[2025-03-10 16:04:06,526][00323] Num frames 8700... |
|
[2025-03-10 16:04:06,661][00323] Num frames 8800... |
|
[2025-03-10 16:04:06,788][00323] Num frames 8900... |
|
[2025-03-10 16:04:06,916][00323] Num frames 9000... |
|
[2025-03-10 16:04:07,047][00323] Num frames 9100... |
|
[2025-03-10 16:04:07,175][00323] Num frames 9200... |
|
[2025-03-10 16:04:07,302][00323] Num frames 9300... |
|
[2025-03-10 16:04:07,432][00323] Num frames 9400... |
|
[2025-03-10 16:04:07,559][00323] Num frames 9500... |
|
[2025-03-10 16:04:07,701][00323] Num frames 9600... |
|
[2025-03-10 16:04:07,797][00323] Avg episode rewards: #0: 21.129, true rewards: #0: 9.629 |
|
[2025-03-10 16:04:07,799][00323] Avg episode reward: 21.129, avg true_objective: 9.629 |
|
[2025-03-10 16:05:06,430][00323] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
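The saved replay.mp4 can be previewed inline in the Colab notebook; a common IPython pattern for that (nothing Sample Factory specific):

```python
from base64 import b64encode
from IPython.display import HTML

mp4 = open("/content/train_dir/default_experiment/replay.mp4", "rb").read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML(f'<video width=640 controls><source src="{data_url}" type="video/mp4"></video>')
```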
[2025-03-10 16:06:39,903][00323] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-03-10 16:06:39,904][00323] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-03-10 16:06:39,905][00323] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-03-10 16:06:39,905][00323] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-03-10 16:06:39,906][00323] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-10 16:06:39,907][00323] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-03-10 16:06:39,908][00323] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-03-10 16:06:39,909][00323] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-03-10 16:06:39,910][00323] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-03-10 16:06:39,910][00323] Adding new argument 'hf_repository'='guife33/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-03-10 16:06:39,911][00323] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-03-10 16:06:39,912][00323] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-03-10 16:06:39,913][00323] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-03-10 16:06:39,913][00323] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-03-10 16:06:39,914][00323] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-03-10 16:06:39,939][00323] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-10 16:06:39,941][00323] RunningMeanStd input shape: (1,) |
|
[2025-03-10 16:06:39,952][00323] ConvEncoder: input_channels=3 |
|
[2025-03-10 16:06:39,985][00323] Conv encoder output size: 512 |
|
[2025-03-10 16:06:39,986][00323] Policy head output size: 512 |
|
[2025-03-10 16:06:40,004][00323] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-03-10 16:06:40,443][00323] Num frames 100... |
|
[2025-03-10 16:06:40,571][00323] Num frames 200... |
|
[2025-03-10 16:06:40,698][00323] Num frames 300... |
|
[2025-03-10 16:06:40,826][00323] Num frames 400... |
|
[2025-03-10 16:06:40,957][00323] Num frames 500... |
|
[2025-03-10 16:06:41,083][00323] Num frames 600... |
|
[2025-03-10 16:06:41,196][00323] Avg episode rewards: #0: 14.470, true rewards: #0: 6.470 |
|
[2025-03-10 16:06:41,197][00323] Avg episode reward: 14.470, avg true_objective: 6.470 |
|
[2025-03-10 16:06:41,264][00323] Num frames 700... |
|
[2025-03-10 16:06:41,399][00323] Num frames 800... |
|
[2025-03-10 16:06:41,527][00323] Num frames 900... |
|
[2025-03-10 16:06:41,669][00323] Num frames 1000... |
|
[2025-03-10 16:06:41,852][00323] Num frames 1100... |
|
[2025-03-10 16:06:42,030][00323] Num frames 1200... |
|
[2025-03-10 16:06:42,218][00323] Num frames 1300... |
|
[2025-03-10 16:06:42,404][00323] Num frames 1400... |
|
[2025-03-10 16:06:42,591][00323] Num frames 1500... |
|
[2025-03-10 16:06:42,760][00323] Num frames 1600... |
|
[2025-03-10 16:06:42,928][00323] Num frames 1700... |
|
[2025-03-10 16:06:43,106][00323] Num frames 1800... |
|
[2025-03-10 16:06:43,287][00323] Num frames 1900... |
|
[2025-03-10 16:06:43,462][00323] Num frames 2000... |
|
[2025-03-10 16:06:43,649][00323] Num frames 2100... |
|
[2025-03-10 16:06:43,826][00323] Num frames 2200... |
|
[2025-03-10 16:06:43,962][00323] Num frames 2300... |
|
[2025-03-10 16:06:44,033][00323] Avg episode rewards: #0: 27.555, true rewards: #0: 11.555 |
|
[2025-03-10 16:06:44,034][00323] Avg episode reward: 27.555, avg true_objective: 11.555 |
|
[2025-03-10 16:06:44,147][00323] Num frames 2400... |
|
[2025-03-10 16:06:44,272][00323] Num frames 2500... |
|
[2025-03-10 16:06:44,404][00323] Num frames 2600... |
|
[2025-03-10 16:06:44,542][00323] Num frames 2700... |
|
[2025-03-10 16:06:44,669][00323] Num frames 2800... |
|
[2025-03-10 16:06:44,796][00323] Num frames 2900... |
|
[2025-03-10 16:06:44,922][00323] Num frames 3000... |
|
[2025-03-10 16:06:45,048][00323] Num frames 3100... |
|
[2025-03-10 16:06:45,193][00323] Avg episode rewards: #0: 24.563, true rewards: #0: 10.563 |
|
[2025-03-10 16:06:45,194][00323] Avg episode reward: 24.563, avg true_objective: 10.563 |
|
[2025-03-10 16:06:45,233][00323] Num frames 3200... |
|
[2025-03-10 16:06:45,360][00323] Num frames 3300... |
|
[2025-03-10 16:06:45,490][00323] Num frames 3400... |
|
[2025-03-10 16:06:45,622][00323] Num frames 3500... |
|
[2025-03-10 16:06:45,749][00323] Num frames 3600... |
|
[2025-03-10 16:06:45,872][00323] Num frames 3700... |
|
[2025-03-10 16:06:45,997][00323] Num frames 3800... |
|
[2025-03-10 16:06:46,123][00323] Num frames 3900... |
|
[2025-03-10 16:06:46,250][00323] Num frames 4000... |
|
[2025-03-10 16:06:46,379][00323] Num frames 4100... |
|
[2025-03-10 16:06:46,505][00323] Num frames 4200... |
|
[2025-03-10 16:06:46,646][00323] Num frames 4300... |
|
[2025-03-10 16:06:46,785][00323] Num frames 4400... |
|
[2025-03-10 16:06:46,919][00323] Num frames 4500... |
|
[2025-03-10 16:06:47,067][00323] Num frames 4600... |
|
[2025-03-10 16:06:47,207][00323] Num frames 4700... |
|
[2025-03-10 16:06:47,333][00323] Num frames 4800... |
|
[2025-03-10 16:06:47,470][00323] Num frames 4900... |
|
[2025-03-10 16:06:47,602][00323] Num frames 5000... |
|
[2025-03-10 16:06:47,733][00323] Num frames 5100... |
|
[2025-03-10 16:06:47,863][00323] Num frames 5200... |
|
[2025-03-10 16:06:48,004][00323] Avg episode rewards: #0: 32.422, true rewards: #0: 13.173 |
|
[2025-03-10 16:06:48,005][00323] Avg episode reward: 32.422, avg true_objective: 13.173 |
|
[2025-03-10 16:06:48,044][00323] Num frames 5300... |
|
[2025-03-10 16:06:48,165][00323] Num frames 5400... |
|
[2025-03-10 16:06:48,290][00323] Num frames 5500... |
|
[2025-03-10 16:06:48,421][00323] Num frames 5600... |
|
[2025-03-10 16:06:48,543][00323] Avg episode rewards: #0: 26.706, true rewards: #0: 11.306 |
|
[2025-03-10 16:06:48,544][00323] Avg episode reward: 26.706, avg true_objective: 11.306 |
|
[2025-03-10 16:06:48,604][00323] Num frames 5700... |
|
[2025-03-10 16:06:48,737][00323] Num frames 5800... |
|
[2025-03-10 16:06:48,861][00323] Num frames 5900... |
|
[2025-03-10 16:06:48,990][00323] Num frames 6000... |
|
[2025-03-10 16:06:49,117][00323] Num frames 6100... |
|
[2025-03-10 16:06:49,246][00323] Num frames 6200... |
|
[2025-03-10 16:06:49,378][00323] Num frames 6300... |
|
[2025-03-10 16:06:49,501][00323] Num frames 6400... |
|
[2025-03-10 16:06:49,626][00323] Num frames 6500... |
|
[2025-03-10 16:06:49,761][00323] Num frames 6600... |
|
[2025-03-10 16:06:49,889][00323] Num frames 6700... |
|
[2025-03-10 16:06:50,013][00323] Num frames 6800... |
|
[2025-03-10 16:06:50,140][00323] Num frames 6900... |
|
[2025-03-10 16:06:50,263][00323] Num frames 7000... |
|
[2025-03-10 16:06:50,388][00323] Num frames 7100... |
|
[2025-03-10 16:06:50,513][00323] Num frames 7200... |
|
[2025-03-10 16:06:50,641][00323] Num frames 7300... |
|
[2025-03-10 16:06:50,822][00323] Avg episode rewards: #0: 29.656, true rewards: #0: 12.323 |
|
[2025-03-10 16:06:50,823][00323] Avg episode reward: 29.656, avg true_objective: 12.323 |
|
[2025-03-10 16:06:50,832][00323] Num frames 7400... |
|
[2025-03-10 16:06:50,958][00323] Num frames 7500... |
|
[2025-03-10 16:06:51,083][00323] Num frames 7600... |
|
[2025-03-10 16:06:51,207][00323] Num frames 7700... |
|
[2025-03-10 16:06:51,335][00323] Num frames 7800... |
|
[2025-03-10 16:06:51,404][00323] Avg episode rewards: #0: 26.300, true rewards: #0: 11.157 |
|
[2025-03-10 16:06:51,405][00323] Avg episode reward: 26.300, avg true_objective: 11.157 |
|
[2025-03-10 16:06:51,518][00323] Num frames 7900... |
|
[2025-03-10 16:06:51,645][00323] Num frames 8000... |
|
[2025-03-10 16:06:51,781][00323] Num frames 8100... |
|
[2025-03-10 16:06:51,910][00323] Num frames 8200... |
|
[2025-03-10 16:06:52,038][00323] Avg episode rewards: #0: 23.697, true rewards: #0: 10.322 |
|
[2025-03-10 16:06:52,039][00323] Avg episode reward: 23.697, avg true_objective: 10.322 |
|
[2025-03-10 16:06:52,093][00323] Num frames 8300... |
|
[2025-03-10 16:06:52,227][00323] Num frames 8400... |
|
[2025-03-10 16:06:52,360][00323] Num frames 8500... |
|
[2025-03-10 16:06:52,489][00323] Num frames 8600... |
|
[2025-03-10 16:06:52,617][00323] Num frames 8700... |
|
[2025-03-10 16:06:52,756][00323] Num frames 8800... |
|
[2025-03-10 16:06:52,885][00323] Num frames 8900... |
|
[2025-03-10 16:06:53,012][00323] Num frames 9000... |
|
[2025-03-10 16:06:53,141][00323] Avg episode rewards: #0: 22.731, true rewards: #0: 10.064 |
|
[2025-03-10 16:06:53,142][00323] Avg episode reward: 22.731, avg true_objective: 10.064 |
|
[2025-03-10 16:06:53,198][00323] Num frames 9100... |
|
[2025-03-10 16:06:53,324][00323] Num frames 9200... |
|
[2025-03-10 16:06:53,458][00323] Num frames 9300... |
|
[2025-03-10 16:06:53,585][00323] Num frames 9400... |
|
[2025-03-10 16:06:53,715][00323] Num frames 9500... |
|
[2025-03-10 16:06:53,865][00323] Num frames 9600... |
|
[2025-03-10 16:06:54,042][00323] Num frames 9700... |
|
[2025-03-10 16:06:54,214][00323] Num frames 9800... |
|
[2025-03-10 16:06:54,384][00323] Num frames 9900... |
|
[2025-03-10 16:06:54,554][00323] Num frames 10000... |
|
[2025-03-10 16:06:54,750][00323] Avg episode rewards: #0: 22.382, true rewards: #0: 10.082 |
|
[2025-03-10 16:06:54,751][00323] Avg episode reward: 22.382, avg true_objective: 10.082 |
|
[2025-03-10 16:07:55,633][00323] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-03-10 16:08:06,440][00323] The model has been pushed to https://huggingface.co/guife33/rl_course_vizdoom_health_gathering_supreme |
|
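This upload was driven by the evaluation entry point with the Hub flags visible in the config lines above (push_to_hub=True, hf_repository='guife33/rl_course_vizdoom_health_gathering_supreme', save_video, max_num_episodes=10, max_num_frames=100000). A hedged reconstruction of the invocation; the flags are taken from the log, while the module path and env name are assumptions based on Sample Factory's VizDoom examples:

```python
import subprocess

subprocess.run(
    [
        "python", "-m", "sf_examples.vizdoom.enjoy_vizdoom",  # assumed entry point
        "--env=doom_health_gathering_supreme",                # assumed env id
        "--train_dir=/content/train_dir",
        "--experiment=default_experiment",
        "--num_workers=1", "--no_render", "--save_video",
        "--max_num_episodes=10", "--max_num_frames=100000",
        "--push_to_hub",
        "--hf_repository=guife33/rl_course_vizdoom_health_gathering_supreme",
    ],
    check=True,
)
```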
[2025-03-10 16:17:28,524][00323] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-03-10 16:17:28,525][00323] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-03-10 16:17:28,526][00323] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-03-10 16:17:28,527][00323] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-03-10 16:17:28,528][00323] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-03-10 16:17:28,529][00323] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-03-10 16:17:28,529][00323] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-03-10 16:17:28,530][00323] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-03-10 16:17:28,531][00323] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-03-10 16:17:28,532][00323] Adding new argument 'hf_repository'='guife33/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-03-10 16:17:28,533][00323] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-03-10 16:17:28,534][00323] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-03-10 16:17:28,536][00323] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-03-10 16:17:28,537][00323] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-03-10 16:17:28,538][00323] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-03-10 16:17:28,562][00323] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-03-10 16:17:28,563][00323] RunningMeanStd input shape: (1,) |
|
[2025-03-10 16:17:28,574][00323] ConvEncoder: input_channels=3 |
|
[2025-03-10 16:17:28,606][00323] Conv encoder output size: 512 |
|
[2025-03-10 16:17:28,607][00323] Policy head output size: 512 |
|
[2025-03-10 16:17:28,625][00323] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-03-10 16:17:29,054][00323] Num frames 100... |
|
[2025-03-10 16:17:29,176][00323] Num frames 200... |
|
[2025-03-10 16:17:29,305][00323] Num frames 300... |
|
[2025-03-10 16:17:29,442][00323] Num frames 400... |
|
[2025-03-10 16:17:29,566][00323] Num frames 500... |
|
[2025-03-10 16:17:29,687][00323] Num frames 600... |
|
[2025-03-10 16:17:29,821][00323] Num frames 700... |
|
[2025-03-10 16:17:29,946][00323] Num frames 800... |
|
[2025-03-10 16:17:30,074][00323] Num frames 900... |
|
[2025-03-10 16:17:30,198][00323] Num frames 1000... |
|
[2025-03-10 16:17:30,326][00323] Num frames 1100... |
|
[2025-03-10 16:17:30,458][00323] Num frames 1200... |
|
[2025-03-10 16:17:30,586][00323] Num frames 1300... |
|
[2025-03-10 16:17:30,715][00323] Num frames 1400... |
|
[2025-03-10 16:17:30,848][00323] Num frames 1500... |
|
[2025-03-10 16:17:30,985][00323] Num frames 1600... |
|
[2025-03-10 16:17:31,114][00323] Num frames 1700... |
|
[2025-03-10 16:17:31,240][00323] Num frames 1800... |
|
[2025-03-10 16:17:31,372][00323] Num frames 1900... |
|
[2025-03-10 16:17:31,500][00323] Num frames 2000... |
|
[2025-03-10 16:17:31,635][00323] Num frames 2100... |
|
[2025-03-10 16:17:31,688][00323] Avg episode rewards: #0: 58.999, true rewards: #0: 21.000 |
|
[2025-03-10 16:17:31,688][00323] Avg episode reward: 58.999, avg true_objective: 21.000 |
|
[2025-03-10 16:17:31,824][00323] Num frames 2200... |
|
[2025-03-10 16:17:31,950][00323] Num frames 2300... |
|
[2025-03-10 16:17:32,078][00323] Num frames 2400... |
|
[2025-03-10 16:17:32,204][00323] Num frames 2500... |
|
[2025-03-10 16:17:32,343][00323] Avg episode rewards: #0: 34.329, true rewards: #0: 12.830 |
|
[2025-03-10 16:17:32,344][00323] Avg episode reward: 34.329, avg true_objective: 12.830 |
|
[2025-03-10 16:17:32,394][00323] Num frames 2600... |
|
[2025-03-10 16:17:32,521][00323] Num frames 2700... |
|
[2025-03-10 16:17:32,645][00323] Num frames 2800... |
|
[2025-03-10 16:17:32,776][00323] Num frames 2900... |
|
[2025-03-10 16:17:32,910][00323] Num frames 3000... |
|
[2025-03-10 16:17:33,038][00323] Num frames 3100... |
|
[2025-03-10 16:17:33,163][00323] Num frames 3200... |
|
[2025-03-10 16:17:33,291][00323] Num frames 3300... |
|
[2025-03-10 16:17:33,424][00323] Num frames 3400... |
|
[2025-03-10 16:17:33,551][00323] Num frames 3500... |
|
[2025-03-10 16:17:33,680][00323] Num frames 3600... |
|
[2025-03-10 16:17:33,808][00323] Num frames 3700... |
|
[2025-03-10 16:17:33,946][00323] Num frames 3800... |
|
[2025-03-10 16:17:34,074][00323] Num frames 3900... |
|
[2025-03-10 16:17:34,205][00323] Num frames 4000... |
|
[2025-03-10 16:17:34,334][00323] Num frames 4100... |
|
[2025-03-10 16:17:34,465][00323] Num frames 4200... |
|
[2025-03-10 16:17:34,592][00323] Num frames 4300... |
|
[2025-03-10 16:17:34,721][00323] Num frames 4400... |
|
[2025-03-10 16:17:34,848][00323] Num frames 4500... |
|
[2025-03-10 16:17:34,986][00323] Num frames 4600... |
|
[2025-03-10 16:17:35,126][00323] Avg episode rewards: #0: 43.553, true rewards: #0: 15.553 |
|
[2025-03-10 16:17:35,127][00323] Avg episode reward: 43.553, avg true_objective: 15.553 |
|
[2025-03-10 16:17:35,171][00323] Num frames 4700... |
|
[2025-03-10 16:17:35,296][00323] Num frames 4800... |
|
[2025-03-10 16:17:35,459][00323] Num frames 4900... |
|
[2025-03-10 16:17:35,632][00323] Num frames 5000... |
|
[2025-03-10 16:17:35,802][00323] Num frames 5100... |
|
[2025-03-10 16:17:35,976][00323] Num frames 5200... |
|
[2025-03-10 16:17:36,142][00323] Num frames 5300... |
|
[2025-03-10 16:17:36,307][00323] Num frames 5400... |
|
[2025-03-10 16:17:36,480][00323] Num frames 5500... |
|
[2025-03-10 16:17:36,641][00323] Avg episode rewards: #0: 36.904, true rewards: #0: 13.905 |
|
[2025-03-10 16:17:36,645][00323] Avg episode reward: 36.904, avg true_objective: 13.905 |
|
[2025-03-10 16:17:36,716][00323] Num frames 5600... |
|
[2025-03-10 16:17:36,888][00323] Num frames 5700... |
|
[2025-03-10 16:17:37,093][00323] Num frames 5800... |
|
[2025-03-10 16:17:37,282][00323] Num frames 5900... |
|
[2025-03-10 16:17:37,463][00323] Num frames 6000... |
|
[2025-03-10 16:17:37,638][00323] Num frames 6100... |
|
[2025-03-10 16:17:37,762][00323] Num frames 6200... |
|
[2025-03-10 16:17:37,886][00323] Num frames 6300... |
|
[2025-03-10 16:17:38,022][00323] Num frames 6400... |
|
[2025-03-10 16:17:38,151][00323] Avg episode rewards: #0: 32.918, true rewards: #0: 12.918 |
|
[2025-03-10 16:17:38,152][00323] Avg episode reward: 32.918, avg true_objective: 12.918 |
|
[2025-03-10 16:17:38,205][00323] Num frames 6500... |
|
[2025-03-10 16:17:38,330][00323] Num frames 6600... |
|
[2025-03-10 16:17:38,462][00323] Num frames 6700... |
|
[2025-03-10 16:17:38,589][00323] Num frames 6800... |
|
[2025-03-10 16:17:38,659][00323] Avg episode rewards: #0: 28.185, true rewards: #0: 11.352 |
|
[2025-03-10 16:17:38,660][00323] Avg episode reward: 28.185, avg true_objective: 11.352 |
|
[2025-03-10 16:17:38,771][00323] Num frames 6900... |
|
[2025-03-10 16:17:38,900][00323] Num frames 7000... |
|
[2025-03-10 16:17:39,035][00323] Num frames 7100... |
|
[2025-03-10 16:17:39,161][00323] Num frames 7200... |
|
[2025-03-10 16:17:39,286][00323] Num frames 7300... |
|
[2025-03-10 16:17:39,416][00323] Num frames 7400... |
|
[2025-03-10 16:17:39,542][00323] Num frames 7500... |
|
[2025-03-10 16:17:39,665][00323] Num frames 7600... |
|
[2025-03-10 16:17:39,792][00323] Num frames 7700... |
|
[2025-03-10 16:17:39,915][00323] Num frames 7800... |
|
[2025-03-10 16:17:40,048][00323] Num frames 7900... |
|
[2025-03-10 16:17:40,177][00323] Num frames 8000... |
|
[2025-03-10 16:17:40,303][00323] Num frames 8100... |
|
[2025-03-10 16:17:40,437][00323] Num frames 8200... |
|
[2025-03-10 16:17:40,568][00323] Num frames 8300... |
|
[2025-03-10 16:17:40,724][00323] Avg episode rewards: #0: 29.398, true rewards: #0: 11.970 |
|
[2025-03-10 16:17:40,725][00323] Avg episode reward: 29.398, avg true_objective: 11.970 |
|
[2025-03-10 16:17:40,754][00323] Num frames 8400... |
|
[2025-03-10 16:17:40,881][00323] Num frames 8500... |
|
[2025-03-10 16:17:41,011][00323] Num frames 8600... |
|
[2025-03-10 16:17:41,149][00323] Num frames 8700... |
|
[2025-03-10 16:17:41,276][00323] Num frames 8800... |
|
[2025-03-10 16:17:41,407][00323] Num frames 8900... |
|
[2025-03-10 16:17:41,534][00323] Num frames 9000... |
|
[2025-03-10 16:17:41,663][00323] Num frames 9100... |
|
[2025-03-10 16:17:41,791][00323] Num frames 9200... |
|
[2025-03-10 16:17:41,929][00323] Num frames 9300... |
|
[2025-03-10 16:17:42,061][00323] Num frames 9400... |
|
[2025-03-10 16:17:42,195][00323] Num frames 9500... |
|
[2025-03-10 16:17:42,327][00323] Num frames 9600... |
|
[2025-03-10 16:17:42,462][00323] Num frames 9700... |
|
[2025-03-10 16:17:42,595][00323] Num frames 9800... |
|
[2025-03-10 16:17:42,730][00323] Num frames 9900... |
|
[2025-03-10 16:17:42,865][00323] Num frames 10000... |
|
[2025-03-10 16:17:42,992][00323] Num frames 10100... |
|
[2025-03-10 16:17:43,134][00323] Num frames 10200... |
|
[2025-03-10 16:17:43,265][00323] Num frames 10300... |
|
[2025-03-10 16:17:43,406][00323] Num frames 10400... |
|
[2025-03-10 16:17:43,563][00323] Avg episode rewards: #0: 32.598, true rewards: #0: 13.099 |
|
[2025-03-10 16:17:43,564][00323] Avg episode reward: 32.598, avg true_objective: 13.099 |
|
[2025-03-10 16:17:43,594][00323] Num frames 10500... |
|
[2025-03-10 16:17:43,721][00323] Num frames 10600... |
|
[2025-03-10 16:17:43,852][00323] Num frames 10700... |
|
[2025-03-10 16:17:43,980][00323] Num frames 10800... |
|
[2025-03-10 16:17:44,114][00323] Num frames 10900... |
|
[2025-03-10 16:17:44,249][00323] Num frames 11000... |
|
[2025-03-10 16:17:44,379][00323] Num frames 11100... |
|
[2025-03-10 16:17:44,512][00323] Num frames 11200... |
|
[2025-03-10 16:17:44,643][00323] Num frames 11300... |
|
[2025-03-10 16:17:44,781][00323] Num frames 11400... |
|
[2025-03-10 16:17:44,913][00323] Num frames 11500... |
|
[2025-03-10 16:17:44,992][00323] Avg episode rewards: #0: 31.464, true rewards: #0: 12.798 |
|
[2025-03-10 16:17:44,993][00323] Avg episode reward: 31.464, avg true_objective: 12.798 |
|
[2025-03-10 16:17:45,101][00323] Num frames 11600... |
|
[2025-03-10 16:17:45,236][00323] Num frames 11700... |
|
[2025-03-10 16:17:45,366][00323] Num frames 11800... |
|
[2025-03-10 16:17:45,492][00323] Num frames 11900... |
|
[2025-03-10 16:17:45,616][00323] Num frames 12000... |
|
[2025-03-10 16:17:45,741][00323] Num frames 12100... |
|
[2025-03-10 16:17:45,870][00323] Avg episode rewards: #0: 29.358, true rewards: #0: 12.158 |
|
[2025-03-10 16:17:45,871][00323] Avg episode reward: 29.358, avg true_objective: 12.158 |
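
By this point the script has evaluated 10 episodes; the final running means are 29.358 (shaped reward) and 12.158 (true objective), and 10 episodes averaging 12.158 hundreds of frames is roughly 12,158 frames total, in line with the final "Num frames 12100" counter above. The shape of this log (a frame counter printed every 100 frames, then two average lines per episode) comes from an evaluation loop like the illustrative reconstruction below; this is not Sample Factory's actual code, just a sketch of the logging pattern with a stubbed environment.

```python
# Illustrative reconstruction of the evaluation loop behind this log.
# Not Sample Factory's implementation: the environment/policy are stubbed.
import random

def run_episode():
    """Stub: returns (episode_frames, shaped_reward, true_objective)."""
    frames = random.randrange(400, 2200, 100)
    return frames, frames / 100 * 2.5, frames / 100

num_frames, rewards, true_objs = 0, [], []
for episode in range(10):
    ep_frames, reward, true_obj = run_episode()
    for _ in range(ep_frames // 100):
        num_frames += 100
        print(f"Num frames {num_frames}...")
    rewards.append(reward)
    true_objs.append(true_obj)
    avg_r = sum(rewards) / len(rewards)
    avg_t = sum(true_objs) / len(true_objs)
    print(f"Avg episode rewards: #0: {avg_r:.3f}, true rewards: #0: {avg_t:.3f}")
    print(f"Avg episode reward: {avg_r:.3f}, avg true_objective: {avg_t:.3f}")
```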
|
[2025-03-10 16:18:57,637][00323] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
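
The replay video is written once all evaluation episodes finish. For reference, a run like this can typically be reproduced with Sample Factory 2.x's enjoy entry point; the sketch below is hedged: `enjoy()` and the flags shown exist in recent Sample Factory versions, but the environment name is an assumption (it is not shown in this log), and environment registration (e.g., the VizDoom components from `sf_examples`) is omitted and must be done first.

```python
# Hedged sketch: re-running this evaluation with Sample Factory 2.x.
# Assumptions: the env name below is NOT shown in the log; env registration
# (as done in Sample Factory's sf_examples package) is omitted; module paths
# and flag names should be checked against the installed version.
from sample_factory.cfg.arg_parser import parse_full_cfg, parse_sf_args
from sample_factory.enjoy import enjoy

argv = [
    "--env=doom_health_gathering_supreme",  # assumed env for this experiment
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
    "--max_num_episodes=10",                # matches the 10 episodes above
    "--save_video",                         # writes replay.mp4 into train_dir
    "--no_render",
]
parser, _ = parse_sf_args(argv=argv, evaluation=True)
cfg = parse_full_cfg(parser, argv)
status = enjoy(cfg)
```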
|
|