[2023-02-24 12:51:06,275][00555] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-02-24 12:51:06,279][00555] Rollout worker 0 uses device cpu
[2023-02-24 12:51:06,282][00555] Rollout worker 1 uses device cpu
[2023-02-24 12:51:06,283][00555] Rollout worker 2 uses device cpu
[2023-02-24 12:51:06,285][00555] Rollout worker 3 uses device cpu
[2023-02-24 12:51:06,288][00555] Rollout worker 4 uses device cpu
[2023-02-24 12:51:06,291][00555] Rollout worker 5 uses device cpu
[2023-02-24 12:51:06,294][00555] Rollout worker 6 uses device cpu
[2023-02-24 12:51:06,296][00555] Rollout worker 7 uses device cpu
[2023-02-24 12:51:06,678][00555] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 12:51:06,681][00555] InferenceWorker_p0-w0: min num requests: 2
[2023-02-24 12:51:06,730][00555] Starting all processes...
[2023-02-24 12:51:06,732][00555] Starting process learner_proc0
[2023-02-24 12:51:06,831][00555] Starting all processes...
[2023-02-24 12:51:06,901][00555] Starting process inference_proc0-0
[2023-02-24 12:51:06,903][00555] Starting process rollout_proc0
[2023-02-24 12:51:06,904][00555] Starting process rollout_proc1
[2023-02-24 12:51:06,904][00555] Starting process rollout_proc2
[2023-02-24 12:51:06,904][00555] Starting process rollout_proc3
[2023-02-24 12:51:06,904][00555] Starting process rollout_proc4
[2023-02-24 12:51:06,904][00555] Starting process rollout_proc5
[2023-02-24 12:51:06,904][00555] Starting process rollout_proc6
[2023-02-24 12:51:06,904][00555] Starting process rollout_proc7
[2023-02-24 12:51:17,448][12811] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 12:51:17,457][12811] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-02-24 12:51:18,065][12827] Worker 1 uses CPU cores [1]
[2023-02-24 12:51:18,130][12833] Worker 6 uses CPU cores [0]
[2023-02-24 12:51:18,127][12830] Worker 4 uses CPU cores [0]
[2023-02-24 12:51:18,190][12825] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 12:51:18,193][12825] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-02-24 12:51:18,369][12828] Worker 2 uses CPU cores [0]
[2023-02-24 12:51:18,413][12831] Worker 5 uses CPU cores [1]
[2023-02-24 12:51:18,441][12832] Worker 7 uses CPU cores [1]
[2023-02-24 12:51:18,459][12829] Worker 3 uses CPU cores [1]
[2023-02-24 12:51:18,583][12826] Worker 0 uses CPU cores [0]
[2023-02-24 12:51:18,684][12825] Num visible devices: 1
[2023-02-24 12:51:18,685][12811] Num visible devices: 1
[2023-02-24 12:51:18,692][12811] Starting seed is not provided
[2023-02-24 12:51:18,693][12811] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 12:51:18,694][12811] Initializing actor-critic model on device cuda:0
[2023-02-24 12:51:18,695][12811] RunningMeanStd input shape: (3, 72, 128)
[2023-02-24 12:51:18,698][12811] RunningMeanStd input shape: (1,)
[2023-02-24 12:51:18,718][12811] ConvEncoder: input_channels=3
[2023-02-24 12:51:19,061][12811] Conv encoder output size: 512
[2023-02-24 12:51:19,062][12811] Policy head output size: 512
[2023-02-24 12:51:19,120][12811] Created Actor Critic model with architecture:
[2023-02-24 12:51:19,120][12811] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
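For orientation, the printed module tree can be approximated in plain PyTorch. The sketch below is a minimal stand-in, not Sample Factory's actual classes: the kernel sizes and strides of the three conv layers are assumptions (the log only shows their count and the 512-dim encoder/policy outputs), and the observation/return normalizers are omitted.

```python
import torch
from torch import nn

class SharedActorCritic(nn.Module):
    """Illustrative stand-in for ActorCriticSharedWeights:
    conv encoder -> MLP -> GRU core -> value head + 5-action policy head."""

    def __init__(self, num_actions=5, hidden=512):
        super().__init__()
        # Three Conv2d+ELU pairs as in the printed conv_head; the exact
        # kernels/strides here are assumptions (Atari-style stack).
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.ELU(),
        )
        # Infer the flattened conv output size from the (3, 72, 128) obs shape.
        with torch.no_grad():
            n_flat = self.conv(torch.zeros(1, 3, 72, 128)).flatten(1).shape[1]
        self.mlp = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())
        self.core = nn.GRU(hidden, hidden)          # matches GRU(512, 512)
        self.critic_linear = nn.Linear(hidden, 1)
        self.distribution_linear = nn.Linear(hidden, num_actions)

    def forward(self, obs, rnn_state=None):
        x = self.mlp(self.conv(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # seq len 1
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

model = SharedActorCritic()
logits, value, state = model(torch.zeros(4, 3, 72, 128))
```

The policy and value heads share the encoder and GRU core, which is what the "SharedWeights" in the class name refers to.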
[2023-02-24 12:51:26,522][12811] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-02-24 12:51:26,523][12811] No checkpoints found
[2023-02-24 12:51:26,523][12811] Did not load from checkpoint, starting from scratch!
[2023-02-24 12:51:26,523][12811] Initialized policy 0 weights for model version 0
[2023-02-24 12:51:26,529][12811] LearnerWorker_p0 finished initialization!
[2023-02-24 12:51:26,532][12811] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 12:51:26,651][12825] RunningMeanStd input shape: (3, 72, 128)
[2023-02-24 12:51:26,652][12825] RunningMeanStd input shape: (1,)
[2023-02-24 12:51:26,659][00555] Heartbeat connected on Batcher_0
[2023-02-24 12:51:26,671][12825] ConvEncoder: input_channels=3
[2023-02-24 12:51:26,672][00555] Heartbeat connected on LearnerWorker_p0
[2023-02-24 12:51:26,691][00555] Heartbeat connected on RolloutWorker_w0
[2023-02-24 12:51:26,702][00555] Heartbeat connected on RolloutWorker_w1
[2023-02-24 12:51:26,705][00555] Heartbeat connected on RolloutWorker_w2
[2023-02-24 12:51:26,710][00555] Heartbeat connected on RolloutWorker_w3
[2023-02-24 12:51:26,716][00555] Heartbeat connected on RolloutWorker_w4
[2023-02-24 12:51:26,721][00555] Heartbeat connected on RolloutWorker_w5
[2023-02-24 12:51:26,726][00555] Heartbeat connected on RolloutWorker_w6
[2023-02-24 12:51:26,732][00555] Heartbeat connected on RolloutWorker_w7
[2023-02-24 12:51:26,811][12825] Conv encoder output size: 512
[2023-02-24 12:51:26,811][12825] Policy head output size: 512
[2023-02-24 12:51:29,207][00555] Inference worker 0-0 is ready!
[2023-02-24 12:51:29,209][00555] All inference workers are ready! Signal rollout workers to start!
[2023-02-24 12:51:29,214][00555] Heartbeat connected on InferenceWorker_p0-w0
[2023-02-24 12:51:29,327][12826] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 12:51:29,337][12830] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 12:51:29,342][12828] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 12:51:29,366][12833] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 12:51:29,382][12832] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 12:51:29,383][12827] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 12:51:29,404][12829] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 12:51:29,426][12831] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 12:51:30,841][12832] Decorrelating experience for 0 frames...
[2023-02-24 12:51:30,843][12831] Decorrelating experience for 0 frames...
[2023-02-24 12:51:31,070][12830] Decorrelating experience for 0 frames...
[2023-02-24 12:51:31,082][12826] Decorrelating experience for 0 frames...
[2023-02-24 12:51:31,074][12833] Decorrelating experience for 0 frames...
[2023-02-24 12:51:31,308][00555] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
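Each of these status lines reports throughput over 10/60/300-second sliding windows plus total frames, samples, and policy lag; the `nan` values above simply mean no samples have arrived yet. The windowed estimate can be sketched as below (an illustration of the idea, not sample-factory's own implementation):

```python
import time
from collections import deque

class FpsTracker:
    """Sliding-window FPS like the '(10 sec: ..., 60 sec: ..., 300 sec: ...)'
    fields in the status lines. Simplified sketch, not the real code."""

    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.samples = deque()  # (timestamp, total_frames) pairs

    def record(self, total_frames, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, total_frames))
        # Drop samples older than the largest window.
        while now - self.samples[0][0] > max(self.windows):
            self.samples.popleft()

    def fps(self, window, now=None):
        now = self.samples[-1][0] if now is None else now
        recent = [(t, f) for t, f in self.samples if now - t <= window]
        if len(recent) < 2:
            return float("nan")  # matches the 'nan' before data arrives
        (t0, f0), (t1, f1) = recent[0], recent[-1]
        return (f1 - f0) / (t1 - t0) if t1 > t0 else float("nan")

tracker = FpsTracker()
for i in range(6):  # pretend a status sample lands every 5 seconds
    tracker.record(total_frames=i * 4096, now=float(i * 5))
short_window_fps = tracker.fps(10)  # frames over the last 10 seconds
```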
[2023-02-24 12:51:31,642][12832] Decorrelating experience for 32 frames...
[2023-02-24 12:51:32,336][12830] Decorrelating experience for 32 frames...
[2023-02-24 12:51:32,338][12833] Decorrelating experience for 32 frames...
[2023-02-24 12:51:32,429][12827] Decorrelating experience for 0 frames...
[2023-02-24 12:51:32,475][12831] Decorrelating experience for 32 frames...
[2023-02-24 12:51:33,392][12832] Decorrelating experience for 64 frames...
[2023-02-24 12:51:33,528][12827] Decorrelating experience for 32 frames...
[2023-02-24 12:51:33,982][12833] Decorrelating experience for 64 frames...
[2023-02-24 12:51:33,983][12830] Decorrelating experience for 64 frames...
[2023-02-24 12:51:35,017][12832] Decorrelating experience for 96 frames...
[2023-02-24 12:51:35,065][12828] Decorrelating experience for 0 frames...
[2023-02-24 12:51:35,306][12827] Decorrelating experience for 64 frames...
[2023-02-24 12:51:35,527][12830] Decorrelating experience for 96 frames...
[2023-02-24 12:51:35,531][12833] Decorrelating experience for 96 frames...
[2023-02-24 12:51:36,225][12828] Decorrelating experience for 32 frames...
[2023-02-24 12:51:36,309][00555] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-24 12:51:37,147][12828] Decorrelating experience for 64 frames...
[2023-02-24 12:51:37,237][12831] Decorrelating experience for 64 frames...
[2023-02-24 12:51:37,513][12827] Decorrelating experience for 96 frames...
[2023-02-24 12:51:37,991][12826] Decorrelating experience for 32 frames...
[2023-02-24 12:51:38,029][12828] Decorrelating experience for 96 frames...
[2023-02-24 12:51:38,365][12831] Decorrelating experience for 96 frames...
[2023-02-24 12:51:38,526][12826] Decorrelating experience for 64 frames...
[2023-02-24 12:51:38,943][12826] Decorrelating experience for 96 frames...
[2023-02-24 12:51:39,053][12829] Decorrelating experience for 0 frames...
[2023-02-24 12:51:39,376][12829] Decorrelating experience for 32 frames...
[2023-02-24 12:51:39,741][12829] Decorrelating experience for 64 frames...
[2023-02-24 12:51:41,308][00555] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 134.8. Samples: 1348. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-24 12:51:41,315][00555] Avg episode reward: [(0, '1.565')]
[2023-02-24 12:51:42,066][12829] Decorrelating experience for 96 frames...
[2023-02-24 12:51:43,212][12811] Signal inference workers to stop experience collection...
[2023-02-24 12:51:43,244][12825] InferenceWorker_p0-w0: stopping experience collection
[2023-02-24 12:51:45,733][12811] Signal inference workers to resume experience collection...
[2023-02-24 12:51:45,736][12825] InferenceWorker_p0-w0: resuming experience collection
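The stop/resume pair above is the learner briefly pausing experience collection (here, around the first batch of training) and then signaling the workers to continue. Within a single process the same pattern can be sketched with a `threading.Event`; the real system coordinates separate processes, so this is only an illustration of the signaling idea:

```python
import threading
import time

collect = threading.Event()  # set -> workers may collect experience
collect.set()
stop = threading.Event()
frames = []

def rollout_worker():
    while not stop.is_set():
        collect.wait(timeout=0.1)   # block while collection is paused
        if collect.is_set():
            frames.append(1)        # stand-in for stepping the environment
        time.sleep(0.001)

t = threading.Thread(target=rollout_worker)
t.start()
time.sleep(0.05)
collect.clear()                     # "stopping experience collection"
time.sleep(0.01)                    # let any in-flight step finish
before = len(frames)
time.sleep(0.05)
collected_while_paused = len(frames) - before  # expect 0
collect.set()                       # "resuming experience collection"
time.sleep(0.05)
stop.set()
t.join()
```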
[2023-02-24 12:51:46,308][00555] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 181.9. Samples: 2728. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2023-02-24 12:51:46,314][00555] Avg episode reward: [(0, '2.545')]
[2023-02-24 12:51:51,308][00555] Fps is (10 sec: 2048.0, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 266.0. Samples: 5320. Policy #0 lag: (min: 0.0, avg: 1.1, max: 3.0)
[2023-02-24 12:51:51,314][00555] Avg episode reward: [(0, '3.485')]
[2023-02-24 12:51:56,308][00555] Fps is (10 sec: 2867.2, 60 sec: 1310.7, 300 sec: 1310.7). Total num frames: 32768. Throughput: 0: 361.4. Samples: 9036. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 12:51:56,311][00555] Avg episode reward: [(0, '3.764')]
[2023-02-24 12:51:59,044][12825] Updated weights for policy 0, policy_version 10 (0.0373)
[2023-02-24 12:52:01,310][00555] Fps is (10 sec: 2866.7, 60 sec: 1638.3, 300 sec: 1638.3). Total num frames: 49152. Throughput: 0: 369.6. Samples: 11088. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:52:01,314][00555] Avg episode reward: [(0, '4.388')]
[2023-02-24 12:52:06,308][00555] Fps is (10 sec: 3686.4, 60 sec: 1989.5, 300 sec: 1989.5). Total num frames: 69632. Throughput: 0: 492.5. Samples: 17238. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 12:52:06,315][00555] Avg episode reward: [(0, '4.505')]
[2023-02-24 12:52:08,954][12825] Updated weights for policy 0, policy_version 20 (0.0023)
[2023-02-24 12:52:11,308][00555] Fps is (10 sec: 3687.1, 60 sec: 2150.4, 300 sec: 2150.4). Total num frames: 86016. Throughput: 0: 568.6. Samples: 22744. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 12:52:11,313][00555] Avg episode reward: [(0, '4.375')]
[2023-02-24 12:52:16,308][00555] Fps is (10 sec: 2867.2, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 98304. Throughput: 0: 549.1. Samples: 24710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:52:16,313][00555] Avg episode reward: [(0, '4.310')]
[2023-02-24 12:52:21,308][00555] Fps is (10 sec: 2867.2, 60 sec: 2293.8, 300 sec: 2293.8). Total num frames: 114688. Throughput: 0: 641.9. Samples: 28884. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:52:21,315][00555] Avg episode reward: [(0, '4.451')]
[2023-02-24 12:52:21,324][12811] Saving new best policy, reward=4.451!
[2023-02-24 12:52:22,782][12825] Updated weights for policy 0, policy_version 30 (0.0029)
[2023-02-24 12:52:26,308][00555] Fps is (10 sec: 3686.4, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 135168. Throughput: 0: 749.7. Samples: 35084. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:52:26,311][00555] Avg episode reward: [(0, '4.414')]
[2023-02-24 12:52:31,309][00555] Fps is (10 sec: 4095.8, 60 sec: 2594.1, 300 sec: 2594.1). Total num frames: 155648. Throughput: 0: 791.0. Samples: 38322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 12:52:31,313][00555] Avg episode reward: [(0, '4.392')]
[2023-02-24 12:52:33,838][12825] Updated weights for policy 0, policy_version 40 (0.0018)
[2023-02-24 12:52:36,309][00555] Fps is (10 sec: 3276.7, 60 sec: 2798.9, 300 sec: 2583.6). Total num frames: 167936. Throughput: 0: 825.2. Samples: 42456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:52:36,315][00555] Avg episode reward: [(0, '4.553')]
[2023-02-24 12:52:36,317][12811] Saving new best policy, reward=4.553!
[2023-02-24 12:52:41,311][00555] Fps is (10 sec: 2457.0, 60 sec: 3003.6, 300 sec: 2574.5). Total num frames: 180224. Throughput: 0: 827.4. Samples: 46272. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:52:41,315][00555] Avg episode reward: [(0, '4.701')]
[2023-02-24 12:52:41,330][12811] Saving new best policy, reward=4.701!
[2023-02-24 12:52:46,308][00555] Fps is (10 sec: 3276.9, 60 sec: 3276.8, 300 sec: 2676.1). Total num frames: 200704. Throughput: 0: 844.8. Samples: 49102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:52:46,311][00555] Avg episode reward: [(0, '4.461')]
[2023-02-24 12:52:46,828][12825] Updated weights for policy 0, policy_version 50 (0.0039)
[2023-02-24 12:52:51,308][00555] Fps is (10 sec: 4097.2, 60 sec: 3345.1, 300 sec: 2764.8). Total num frames: 221184. Throughput: 0: 850.5. Samples: 55510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:52:51,314][00555] Avg episode reward: [(0, '4.467')]
[2023-02-24 12:52:56,316][00555] Fps is (10 sec: 3274.4, 60 sec: 3344.7, 300 sec: 2746.5). Total num frames: 233472. Throughput: 0: 825.6. Samples: 59904. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:52:56,318][00555] Avg episode reward: [(0, '4.648')]
[2023-02-24 12:52:59,318][12825] Updated weights for policy 0, policy_version 60 (0.0024)
[2023-02-24 12:53:01,310][00555] Fps is (10 sec: 2866.8, 60 sec: 3345.1, 300 sec: 2776.1). Total num frames: 249856. Throughput: 0: 825.7. Samples: 61866. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 12:53:01,316][00555] Avg episode reward: [(0, '4.585')]
[2023-02-24 12:53:01,330][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000061_249856.pth...
[2023-02-24 12:53:06,309][00555] Fps is (10 sec: 3279.1, 60 sec: 3276.8, 300 sec: 2802.5). Total num frames: 266240. Throughput: 0: 840.5. Samples: 66708. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:53:06,318][00555] Avg episode reward: [(0, '4.567')]
[2023-02-24 12:53:10,251][12825] Updated weights for policy 0, policy_version 70 (0.0016)
[2023-02-24 12:53:11,308][00555] Fps is (10 sec: 4096.6, 60 sec: 3413.3, 300 sec: 2908.2). Total num frames: 290816. Throughput: 0: 845.0. Samples: 73108. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:53:11,311][00555] Avg episode reward: [(0, '4.379')]
[2023-02-24 12:53:16,308][00555] Fps is (10 sec: 3686.5, 60 sec: 3413.3, 300 sec: 2886.7). Total num frames: 303104. Throughput: 0: 832.0. Samples: 75760. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-24 12:53:16,314][00555] Avg episode reward: [(0, '4.299')]
[2023-02-24 12:53:21,311][00555] Fps is (10 sec: 2866.6, 60 sec: 3413.2, 300 sec: 2904.4). Total num frames: 319488. Throughput: 0: 830.4. Samples: 79824. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 12:53:21,313][00555] Avg episode reward: [(0, '4.329')]
[2023-02-24 12:53:23,985][12825] Updated weights for policy 0, policy_version 80 (0.0037)
[2023-02-24 12:53:26,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 2920.6). Total num frames: 335872. Throughput: 0: 857.5. Samples: 84858. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 12:53:26,311][00555] Avg episode reward: [(0, '4.273')]
[2023-02-24 12:53:31,308][00555] Fps is (10 sec: 3687.2, 60 sec: 3345.1, 300 sec: 2969.6). Total num frames: 356352. Throughput: 0: 867.1. Samples: 88122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 12:53:31,314][00555] Avg episode reward: [(0, '4.190')]
[2023-02-24 12:53:33,735][12825] Updated weights for policy 0, policy_version 90 (0.0016)
[2023-02-24 12:53:36,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 2981.9). Total num frames: 372736. Throughput: 0: 850.7. Samples: 93792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 12:53:36,313][00555] Avg episode reward: [(0, '4.301')]
[2023-02-24 12:53:41,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.8, 300 sec: 2993.2). Total num frames: 389120. Throughput: 0: 842.5. Samples: 97812. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:53:41,311][00555] Avg episode reward: [(0, '4.499')]
[2023-02-24 12:53:46,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3003.7). Total num frames: 405504. Throughput: 0: 843.9. Samples: 99842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 12:53:46,311][00555] Avg episode reward: [(0, '4.589')]
[2023-02-24 12:53:47,275][12825] Updated weights for policy 0, policy_version 100 (0.0018)
[2023-02-24 12:53:51,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3042.7). Total num frames: 425984. Throughput: 0: 878.5. Samples: 106242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:53:51,311][00555] Avg episode reward: [(0, '4.498')]
[2023-02-24 12:53:56,308][00555] Fps is (10 sec: 3686.3, 60 sec: 3482.0, 300 sec: 3050.8). Total num frames: 442368. Throughput: 0: 862.1. Samples: 111902. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:53:56,311][00555] Avg episode reward: [(0, '4.649')]
[2023-02-24 12:53:58,370][12825] Updated weights for policy 0, policy_version 110 (0.0021)
[2023-02-24 12:54:01,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.4, 300 sec: 3031.0). Total num frames: 454656. Throughput: 0: 846.5. Samples: 113852. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 12:54:01,311][00555] Avg episode reward: [(0, '4.570')]
[2023-02-24 12:54:06,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.4, 300 sec: 3039.0). Total num frames: 471040. Throughput: 0: 844.5. Samples: 117824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:54:06,311][00555] Avg episode reward: [(0, '4.613')]
[2023-02-24 12:54:10,490][12825] Updated weights for policy 0, policy_version 120 (0.0025)
[2023-02-24 12:54:11,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3072.0). Total num frames: 491520. Throughput: 0: 875.9. Samples: 124274. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 12:54:11,311][00555] Avg episode reward: [(0, '4.577')]
[2023-02-24 12:54:16,309][00555] Fps is (10 sec: 4095.9, 60 sec: 3481.6, 300 sec: 3103.0). Total num frames: 512000. Throughput: 0: 875.6. Samples: 127526. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:54:16,313][00555] Avg episode reward: [(0, '4.561')]
[2023-02-24 12:54:21,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3108.1). Total num frames: 528384. Throughput: 0: 849.3. Samples: 132012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:54:21,310][00555] Avg episode reward: [(0, '4.606')]
[2023-02-24 12:54:22,706][12825] Updated weights for policy 0, policy_version 130 (0.0013)
[2023-02-24 12:54:26,309][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3089.5). Total num frames: 540672. Throughput: 0: 851.4. Samples: 136126. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:54:26,312][00555] Avg episode reward: [(0, '4.573')]
[2023-02-24 12:54:31,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3117.5). Total num frames: 561152. Throughput: 0: 875.7. Samples: 139248. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:54:31,312][00555] Avg episode reward: [(0, '4.571')]
[2023-02-24 12:54:33,462][12825] Updated weights for policy 0, policy_version 140 (0.0027)
[2023-02-24 12:54:36,308][00555] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3144.0). Total num frames: 581632. Throughput: 0: 876.2. Samples: 145670. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-24 12:54:36,311][00555] Avg episode reward: [(0, '4.485')]
[2023-02-24 12:54:41,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3147.5). Total num frames: 598016. Throughput: 0: 852.8. Samples: 150276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:54:41,314][00555] Avg episode reward: [(0, '4.333')]
[2023-02-24 12:54:46,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3129.8). Total num frames: 610304. Throughput: 0: 854.8. Samples: 152316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:54:46,320][00555] Avg episode reward: [(0, '4.341')]
[2023-02-24 12:54:46,925][12825] Updated weights for policy 0, policy_version 150 (0.0032)
[2023-02-24 12:54:51,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3153.9). Total num frames: 630784. Throughput: 0: 884.7. Samples: 157634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:54:51,311][00555] Avg episode reward: [(0, '4.335')]
[2023-02-24 12:54:56,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3176.9). Total num frames: 651264. Throughput: 0: 884.7. Samples: 164086. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:54:56,311][00555] Avg episode reward: [(0, '4.319')]
[2023-02-24 12:54:56,598][12825] Updated weights for policy 0, policy_version 160 (0.0017)
[2023-02-24 12:55:01,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3179.3). Total num frames: 667648. Throughput: 0: 866.8. Samples: 166530. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:55:01,313][00555] Avg episode reward: [(0, '4.467')]
[2023-02-24 12:55:01,327][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000163_667648.pth...
[2023-02-24 12:55:06,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3162.5). Total num frames: 679936. Throughput: 0: 855.3. Samples: 170502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:55:06,311][00555] Avg episode reward: [(0, '4.575')]
[2023-02-24 12:55:10,164][12825] Updated weights for policy 0, policy_version 170 (0.0028)
[2023-02-24 12:55:11,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3183.7). Total num frames: 700416. Throughput: 0: 882.2. Samples: 175824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:55:11,311][00555] Avg episode reward: [(0, '4.498')]
[2023-02-24 12:55:16,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3204.0). Total num frames: 720896. Throughput: 0: 883.1. Samples: 178986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:55:16,311][00555] Avg episode reward: [(0, '4.480')]
[2023-02-24 12:55:20,384][12825] Updated weights for policy 0, policy_version 180 (0.0021)
[2023-02-24 12:55:21,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3205.6). Total num frames: 737280. Throughput: 0: 867.4. Samples: 184702. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:55:21,313][00555] Avg episode reward: [(0, '4.542')]
[2023-02-24 12:55:26,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3189.7). Total num frames: 749568. Throughput: 0: 855.6. Samples: 188776. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 12:55:26,312][00555] Avg episode reward: [(0, '4.514')]
[2023-02-24 12:55:31,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3191.5). Total num frames: 765952. Throughput: 0: 857.7. Samples: 190914. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 12:55:31,316][00555] Avg episode reward: [(0, '4.640')]
[2023-02-24 12:55:33,310][12825] Updated weights for policy 0, policy_version 190 (0.0057)
[2023-02-24 12:55:36,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3226.6). Total num frames: 790528. Throughput: 0: 879.7. Samples: 197222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:55:36,317][00555] Avg episode reward: [(0, '4.756')]
[2023-02-24 12:55:36,321][12811] Saving new best policy, reward=4.756!
[2023-02-24 12:55:41,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3227.6). Total num frames: 806912. Throughput: 0: 865.0. Samples: 203010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:55:41,314][00555] Avg episode reward: [(0, '4.703')]
[2023-02-24 12:55:44,700][12825] Updated weights for policy 0, policy_version 200 (0.0017)
[2023-02-24 12:55:46,311][00555] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3228.6). Total num frames: 823296. Throughput: 0: 855.7. Samples: 205036. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:55:46,315][00555] Avg episode reward: [(0, '4.722')]
[2023-02-24 12:55:51,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3213.8). Total num frames: 835584. Throughput: 0: 862.4. Samples: 209308. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:55:51,312][00555] Avg episode reward: [(0, '4.619')]
[2023-02-24 12:55:56,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3230.4). Total num frames: 856064. Throughput: 0: 883.7. Samples: 215592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:55:56,314][00555] Avg episode reward: [(0, '4.439')]
[2023-02-24 12:55:56,357][12825] Updated weights for policy 0, policy_version 210 (0.0020)
[2023-02-24 12:56:01,315][00555] Fps is (10 sec: 4093.4, 60 sec: 3481.2, 300 sec: 3246.4). Total num frames: 876544. Throughput: 0: 884.2. Samples: 218782. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:56:01,318][00555] Avg episode reward: [(0, '4.523')]
[2023-02-24 12:56:06,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3247.0). Total num frames: 892928. Throughput: 0: 858.6. Samples: 223340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:56:06,313][00555] Avg episode reward: [(0, '4.502')]
[2023-02-24 12:56:08,607][12825] Updated weights for policy 0, policy_version 220 (0.0034)
[2023-02-24 12:56:11,311][00555] Fps is (10 sec: 2868.4, 60 sec: 3413.2, 300 sec: 3232.9). Total num frames: 905216. Throughput: 0: 862.7. Samples: 227600. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 12:56:11,317][00555] Avg episode reward: [(0, '4.477')]
[2023-02-24 12:56:16,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3248.1). Total num frames: 925696. Throughput: 0: 882.5. Samples: 230628. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 12:56:16,312][00555] Avg episode reward: [(0, '4.622')]
[2023-02-24 12:56:19,211][12825] Updated weights for policy 0, policy_version 230 (0.0014)
[2023-02-24 12:56:21,308][00555] Fps is (10 sec: 4506.7, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 950272. Throughput: 0: 888.9. Samples: 237224. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 12:56:21,311][00555] Avg episode reward: [(0, '4.504')]
[2023-02-24 12:56:26,314][00555] Fps is (10 sec: 3684.4, 60 sec: 3549.5, 300 sec: 3262.9). Total num frames: 962560. Throughput: 0: 862.1. Samples: 241808. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:56:26,319][00555] Avg episode reward: [(0, '4.446')]
[2023-02-24 12:56:31,311][00555] Fps is (10 sec: 2456.9, 60 sec: 3481.4, 300 sec: 3304.5). Total num frames: 974848. Throughput: 0: 862.8. Samples: 243866. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:56:31,316][00555] Avg episode reward: [(0, '4.607')]
[2023-02-24 12:56:33,359][12825] Updated weights for policy 0, policy_version 240 (0.0033)
[2023-02-24 12:56:36,308][00555] Fps is (10 sec: 2868.7, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 991232. Throughput: 0: 866.7. Samples: 248308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:56:36,311][00555] Avg episode reward: [(0, '4.547')]
[2023-02-24 12:56:41,308][00555] Fps is (10 sec: 3687.4, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1011712. Throughput: 0: 862.6. Samples: 254408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:56:41,310][00555] Avg episode reward: [(0, '4.536')]
[2023-02-24 12:56:43,518][12825] Updated weights for policy 0, policy_version 250 (0.0018)
[2023-02-24 12:56:46,310][00555] Fps is (10 sec: 3686.0, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1028096. Throughput: 0: 847.7. Samples: 256922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:56:46,316][00555] Avg episode reward: [(0, '4.555')]
[2023-02-24 12:56:51,316][00555] Fps is (10 sec: 2865.0, 60 sec: 3412.9, 300 sec: 3415.6). Total num frames: 1040384. Throughput: 0: 833.5. Samples: 260852. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:56:51,324][00555] Avg episode reward: [(0, '4.582')]
[2023-02-24 12:56:56,308][00555] Fps is (10 sec: 2867.5, 60 sec: 3345.1, 300 sec: 3415.7). Total num frames: 1056768. Throughput: 0: 841.0. Samples: 265444. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:56:56,311][00555] Avg episode reward: [(0, '4.644')]
[2023-02-24 12:56:57,510][12825] Updated weights for policy 0, policy_version 260 (0.0029)
[2023-02-24 12:57:01,309][00555] Fps is (10 sec: 3689.1, 60 sec: 3345.4, 300 sec: 3415.6). Total num frames: 1077248. Throughput: 0: 842.6. Samples: 268546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:57:01,317][00555] Avg episode reward: [(0, '4.605')]
[2023-02-24 12:57:01,331][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000263_1077248.pth...
[2023-02-24 12:57:01,512][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000061_249856.pth
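The save/remove pair above is checkpoint rotation: the learner writes a new `checkpoint_<version>_<frames>.pth` and prunes the oldest so only a few recent files remain. A minimal sketch of that retention scheme (the filename pattern is taken from the log; the keep-count and sorting rule are assumptions, not sample-factory's exact logic):

```python
import os
import re

def rotate_checkpoints(ckpt_dir, keep=2):
    """Delete all but the newest `keep` checkpoints in ckpt_dir.

    Filenames look like checkpoint_000000061_249856.pth, where the second
    number is the total env frames; we sort on it to find the oldest files.
    Returns the list of removed filenames.
    """
    pattern = re.compile(r"checkpoint_(\d+)_(\d+)\.pth$")
    ckpts = sorted(
        (f for f in os.listdir(ckpt_dir) if pattern.search(f)),
        key=lambda f: int(pattern.search(f).group(2)),  # sort by env frames
    )
    removed = []
    for old in ckpts[:-keep]:
        os.remove(os.path.join(ckpt_dir, old))
        removed.append(old)
    return removed
```

With the three checkpoints named in this log, `rotate_checkpoints(d, keep=2)` would remove only `checkpoint_000000061_249856.pth`, matching the Removing line above.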
[2023-02-24 12:57:06,308][00555] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1097728. Throughput: 0: 824.0. Samples: 274304. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:57:06,313][00555] Avg episode reward: [(0, '4.681')]
[2023-02-24 12:57:09,449][12825] Updated weights for policy 0, policy_version 270 (0.0021)
[2023-02-24 12:57:11,308][00555] Fps is (10 sec: 3277.0, 60 sec: 3413.5, 300 sec: 3429.5). Total num frames: 1110016. Throughput: 0: 807.7. Samples: 278152. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:57:11,316][00555] Avg episode reward: [(0, '4.703')]
[2023-02-24 12:57:16,308][00555] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 1122304. Throughput: 0: 806.5. Samples: 280156. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:57:16,316][00555] Avg episode reward: [(0, '4.554')]
[2023-02-24 12:57:21,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 1142784. Throughput: 0: 840.4. Samples: 286126. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:57:21,316][00555] Avg episode reward: [(0, '4.635')]
[2023-02-24 12:57:21,440][12825] Updated weights for policy 0, policy_version 280 (0.0016)
[2023-02-24 12:57:26,315][00555] Fps is (10 sec: 4093.4, 60 sec: 3345.0, 300 sec: 3415.6). Total num frames: 1163264. Throughput: 0: 840.2. Samples: 292224. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:57:26,322][00555] Avg episode reward: [(0, '4.795')]
[2023-02-24 12:57:26,380][12811] Saving new best policy, reward=4.795!
[2023-02-24 12:57:31,309][00555] Fps is (10 sec: 3686.3, 60 sec: 3413.5, 300 sec: 3429.5). Total num frames: 1179648. Throughput: 0: 828.3. Samples: 294194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:57:31,312][00555] Avg episode reward: [(0, '4.697')]
[2023-02-24 12:57:34,177][12825] Updated weights for policy 0, policy_version 290 (0.0018)
[2023-02-24 12:57:36,316][00555] Fps is (10 sec: 2866.8, 60 sec: 3344.6, 300 sec: 3429.5). Total num frames: 1191936. Throughput: 0: 830.4. Samples: 298220. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
[2023-02-24 12:57:36,319][00555] Avg episode reward: [(0, '4.938')] |
|
[2023-02-24 12:57:36,327][12811] Saving new best policy, reward=4.938! |
|
[2023-02-24 12:57:41,308][00555] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3429.5). Total num frames: 1212416. Throughput: 0: 854.2. Samples: 303884. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 12:57:41,315][00555] Avg episode reward: [(0, '4.690')] |
|
[2023-02-24 12:57:44,780][12825] Updated weights for policy 0, policy_version 300 (0.0020) |
|
[2023-02-24 12:57:46,308][00555] Fps is (10 sec: 4099.3, 60 sec: 3413.4, 300 sec: 3429.5). Total num frames: 1232896. Throughput: 0: 857.2. Samples: 307118. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 12:57:46,311][00555] Avg episode reward: [(0, '4.674')] |
|
[2023-02-24 12:57:51,316][00555] Fps is (10 sec: 3683.7, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 1249280. Throughput: 0: 845.6. Samples: 312364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 12:57:51,328][00555] Avg episode reward: [(0, '5.129')] |
|
[2023-02-24 12:57:51,352][12811] Saving new best policy, reward=5.129! |
|
[2023-02-24 12:57:56,310][00555] Fps is (10 sec: 2866.8, 60 sec: 3413.2, 300 sec: 3429.5). Total num frames: 1261568. Throughput: 0: 849.1. Samples: 316362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 12:57:56,315][00555] Avg episode reward: [(0, '5.140')] |
|
[2023-02-24 12:57:56,317][12811] Saving new best policy, reward=5.140! |
|
[2023-02-24 12:57:58,382][12825] Updated weights for policy 0, policy_version 310 (0.0028) |
|
[2023-02-24 12:58:01,308][00555] Fps is (10 sec: 2869.3, 60 sec: 3345.1, 300 sec: 3429.5). Total num frames: 1277952. Throughput: 0: 859.5. Samples: 318832. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-24 12:58:01,318][00555] Avg episode reward: [(0, '5.029')] |
|
[2023-02-24 12:58:06,308][00555] Fps is (10 sec: 4096.6, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1302528. Throughput: 0: 868.2. Samples: 325196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 12:58:06,315][00555] Avg episode reward: [(0, '5.035')] |
|
[2023-02-24 12:58:08,067][12825] Updated weights for policy 0, policy_version 320 (0.0019) |
|
[2023-02-24 12:58:11,309][00555] Fps is (10 sec: 4095.9, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 1318912. Throughput: 0: 851.0. Samples: 330516. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:58:11,312][00555] Avg episode reward: [(0, '4.851')]
[2023-02-24 12:58:16,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3429.6). Total num frames: 1331200. Throughput: 0: 851.3. Samples: 332502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:58:16,316][00555] Avg episode reward: [(0, '4.614')]
[2023-02-24 12:58:21,308][00555] Fps is (10 sec: 2867.3, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1347584. Throughput: 0: 863.0. Samples: 337050. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:58:21,315][00555] Avg episode reward: [(0, '4.744')]
[2023-02-24 12:58:21,765][12825] Updated weights for policy 0, policy_version 330 (0.0046)
[2023-02-24 12:58:26,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.7, 300 sec: 3429.5). Total num frames: 1368064. Throughput: 0: 869.3. Samples: 343004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:58:26,314][00555] Avg episode reward: [(0, '4.888')]
[2023-02-24 12:58:31,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1384448. Throughput: 0: 866.2. Samples: 346096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:58:31,316][00555] Avg episode reward: [(0, '4.944')]
[2023-02-24 12:58:33,458][12825] Updated weights for policy 0, policy_version 340 (0.0022)
[2023-02-24 12:58:36,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.8, 300 sec: 3415.6). Total num frames: 1396736. Throughput: 0: 836.7. Samples: 350010. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:58:36,313][00555] Avg episode reward: [(0, '4.863')]
[2023-02-24 12:58:41,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 1413120. Throughput: 0: 847.5. Samples: 354500. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 12:58:41,311][00555] Avg episode reward: [(0, '4.719')]
[2023-02-24 12:58:45,358][12825] Updated weights for policy 0, policy_version 350 (0.0026)
[2023-02-24 12:58:46,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1437696. Throughput: 0: 865.4. Samples: 357774. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:58:46,311][00555] Avg episode reward: [(0, '4.591')]
[2023-02-24 12:58:51,308][00555] Fps is (10 sec: 4505.7, 60 sec: 3482.0, 300 sec: 3443.4). Total num frames: 1458176. Throughput: 0: 872.5. Samples: 364460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:58:51,315][00555] Avg episode reward: [(0, '4.749')]
[2023-02-24 12:58:56,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3443.4). Total num frames: 1470464. Throughput: 0: 846.2. Samples: 368594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:58:56,311][00555] Avg episode reward: [(0, '4.663')]
[2023-02-24 12:58:56,987][12825] Updated weights for policy 0, policy_version 360 (0.0024)
[2023-02-24 12:59:01,308][00555] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1482752. Throughput: 0: 847.2. Samples: 370628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:59:01,314][00555] Avg episode reward: [(0, '4.572')]
[2023-02-24 12:59:01,334][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000362_1482752.pth...
[2023-02-24 12:59:01,523][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000163_667648.pth
[2023-02-24 12:59:06,311][00555] Fps is (10 sec: 3276.2, 60 sec: 3345.0, 300 sec: 3429.5). Total num frames: 1503232. Throughput: 0: 868.1. Samples: 376114. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 12:59:06,315][00555] Avg episode reward: [(0, '4.701')]
[2023-02-24 12:59:08,314][12825] Updated weights for policy 0, policy_version 370 (0.0016)
[2023-02-24 12:59:11,308][00555] Fps is (10 sec: 4505.6, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 1527808. Throughput: 0: 882.2. Samples: 382702. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:59:11,315][00555] Avg episode reward: [(0, '4.620')]
[2023-02-24 12:59:16,308][00555] Fps is (10 sec: 3687.1, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1540096. Throughput: 0: 861.3. Samples: 384856. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:59:16,313][00555] Avg episode reward: [(0, '4.592')]
[2023-02-24 12:59:21,157][12825] Updated weights for policy 0, policy_version 380 (0.0040)
[2023-02-24 12:59:21,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 1556480. Throughput: 0: 867.3. Samples: 389040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:59:21,314][00555] Avg episode reward: [(0, '4.928')]
[2023-02-24 12:59:26,309][00555] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1572864. Throughput: 0: 890.8. Samples: 394584. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-24 12:59:26,318][00555] Avg episode reward: [(0, '5.074')]
[2023-02-24 12:59:31,160][12825] Updated weights for policy 0, policy_version 390 (0.0023)
[2023-02-24 12:59:31,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1597440. Throughput: 0: 889.4. Samples: 397798. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:59:31,311][00555] Avg episode reward: [(0, '4.945')]
[2023-02-24 12:59:36,308][00555] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 1609728. Throughput: 0: 857.5. Samples: 403048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:59:36,311][00555] Avg episode reward: [(0, '4.922')]
[2023-02-24 12:59:41,309][00555] Fps is (10 sec: 2867.1, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1626112. Throughput: 0: 858.6. Samples: 407232. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 12:59:41,319][00555] Avg episode reward: [(0, '5.099')]
[2023-02-24 12:59:45,100][12825] Updated weights for policy 0, policy_version 400 (0.0024)
[2023-02-24 12:59:46,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1642496. Throughput: 0: 859.9. Samples: 409322. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 12:59:46,311][00555] Avg episode reward: [(0, '5.176')]
[2023-02-24 12:59:46,317][12811] Saving new best policy, reward=5.176!
[2023-02-24 12:59:51,308][00555] Fps is (10 sec: 3686.5, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1662976. Throughput: 0: 875.6. Samples: 415514. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 12:59:51,314][00555] Avg episode reward: [(0, '5.063')]
[2023-02-24 12:59:55,506][12825] Updated weights for policy 0, policy_version 410 (0.0023)
[2023-02-24 12:59:56,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1679360. Throughput: 0: 848.0. Samples: 420860. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-24 12:59:56,318][00555] Avg episode reward: [(0, '5.049')]
[2023-02-24 13:00:01,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1691648. Throughput: 0: 845.0. Samples: 422880. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-24 13:00:01,316][00555] Avg episode reward: [(0, '5.269')]
[2023-02-24 13:00:01,334][12811] Saving new best policy, reward=5.269!
[2023-02-24 13:00:06,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.4, 300 sec: 3415.6). Total num frames: 1708032. Throughput: 0: 842.6. Samples: 426958. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:00:06,311][00555] Avg episode reward: [(0, '5.366')]
[2023-02-24 13:00:06,321][12811] Saving new best policy, reward=5.366!
[2023-02-24 13:00:08,735][12825] Updated weights for policy 0, policy_version 420 (0.0024)
[2023-02-24 13:00:11,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 1728512. Throughput: 0: 860.2. Samples: 433294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:00:11,315][00555] Avg episode reward: [(0, '5.388')]
[2023-02-24 13:00:11,327][12811] Saving new best policy, reward=5.388!
[2023-02-24 13:00:16,314][00555] Fps is (10 sec: 4093.8, 60 sec: 3481.3, 300 sec: 3429.5). Total num frames: 1748992. Throughput: 0: 857.2. Samples: 436378. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:00:16,316][00555] Avg episode reward: [(0, '5.340')]
[2023-02-24 13:00:20,452][12825] Updated weights for policy 0, policy_version 430 (0.0029)
[2023-02-24 13:00:21,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1761280. Throughput: 0: 838.1. Samples: 440762. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:00:21,316][00555] Avg episode reward: [(0, '5.432')]
[2023-02-24 13:00:21,330][12811] Saving new best policy, reward=5.432!
[2023-02-24 13:00:26,308][00555] Fps is (10 sec: 2868.7, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1777664. Throughput: 0: 837.6. Samples: 444922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:00:26,317][00555] Avg episode reward: [(0, '5.432')]
[2023-02-24 13:00:31,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 1798144. Throughput: 0: 863.3. Samples: 448170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:00:31,311][00555] Avg episode reward: [(0, '5.603')]
[2023-02-24 13:00:31,319][12811] Saving new best policy, reward=5.603!
[2023-02-24 13:00:32,068][12825] Updated weights for policy 0, policy_version 440 (0.0030)
[2023-02-24 13:00:36,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1818624. Throughput: 0: 864.9. Samples: 454434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:00:36,318][00555] Avg episode reward: [(0, '5.593')]
[2023-02-24 13:00:41,309][00555] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1830912. Throughput: 0: 833.3. Samples: 458360. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:00:41,312][00555] Avg episode reward: [(0, '5.332')]
[2023-02-24 13:00:45,647][12825] Updated weights for policy 0, policy_version 450 (0.0023)
[2023-02-24 13:00:46,308][00555] Fps is (10 sec: 2457.6, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 1843200. Throughput: 0: 832.1. Samples: 460324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:00:46,312][00555] Avg episode reward: [(0, '5.187')]
[2023-02-24 13:00:51,308][00555] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 1863680. Throughput: 0: 856.6. Samples: 465506. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:00:51,312][00555] Avg episode reward: [(0, '5.429')]
[2023-02-24 13:00:56,202][12825] Updated weights for policy 0, policy_version 460 (0.0017)
[2023-02-24 13:00:56,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3415.7). Total num frames: 1884160. Throughput: 0: 851.7. Samples: 471620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:00:56,318][00555] Avg episode reward: [(0, '5.580')]
[2023-02-24 13:01:01,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 1896448. Throughput: 0: 835.3. Samples: 473962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:01:01,319][00555] Avg episode reward: [(0, '5.223')]
[2023-02-24 13:01:01,334][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000463_1896448.pth...
[2023-02-24 13:01:01,484][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000263_1077248.pth
[2023-02-24 13:01:06,308][00555] Fps is (10 sec: 2457.6, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 1908736. Throughput: 0: 822.2. Samples: 477762. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:01:06,315][00555] Avg episode reward: [(0, '5.292')]
[2023-02-24 13:01:10,178][12825] Updated weights for policy 0, policy_version 470 (0.0030)
[2023-02-24 13:01:11,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 1929216. Throughput: 0: 842.4. Samples: 482832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:01:11,311][00555] Avg episode reward: [(0, '5.832')]
[2023-02-24 13:01:11,328][12811] Saving new best policy, reward=5.832!
[2023-02-24 13:01:16,309][00555] Fps is (10 sec: 4095.7, 60 sec: 3345.3, 300 sec: 3387.9). Total num frames: 1949696. Throughput: 0: 838.7. Samples: 485910. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:01:16,312][00555] Avg episode reward: [(0, '6.060')]
[2023-02-24 13:01:16,315][12811] Saving new best policy, reward=6.060!
[2023-02-24 13:01:21,297][12825] Updated weights for policy 0, policy_version 480 (0.0018)
[2023-02-24 13:01:21,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 1966080. Throughput: 0: 819.7. Samples: 491322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:01:21,314][00555] Avg episode reward: [(0, '6.285')]
[2023-02-24 13:01:21,324][12811] Saving new best policy, reward=6.285!
[2023-02-24 13:01:26,309][00555] Fps is (10 sec: 2867.3, 60 sec: 3345.0, 300 sec: 3401.8). Total num frames: 1978368. Throughput: 0: 817.5. Samples: 495146. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-24 13:01:26,315][00555] Avg episode reward: [(0, '6.285')]
[2023-02-24 13:01:31,308][00555] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3387.9). Total num frames: 1990656. Throughput: 0: 818.8. Samples: 497170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:01:31,316][00555] Avg episode reward: [(0, '6.338')]
[2023-02-24 13:01:31,331][12811] Saving new best policy, reward=6.338!
[2023-02-24 13:01:34,557][12825] Updated weights for policy 0, policy_version 490 (0.0028)
[2023-02-24 13:01:36,308][00555] Fps is (10 sec: 3276.9, 60 sec: 3208.5, 300 sec: 3387.9). Total num frames: 2011136. Throughput: 0: 832.6. Samples: 502974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:01:36,318][00555] Avg episode reward: [(0, '5.984')]
[2023-02-24 13:01:41,312][00555] Fps is (10 sec: 4094.6, 60 sec: 3344.9, 300 sec: 3401.7). Total num frames: 2031616. Throughput: 0: 829.1. Samples: 508932. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:01:41,315][00555] Avg episode reward: [(0, '6.079')]
[2023-02-24 13:01:46,315][00555] Fps is (10 sec: 3274.7, 60 sec: 3344.7, 300 sec: 3401.8). Total num frames: 2043904. Throughput: 0: 823.1. Samples: 511006. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:01:46,318][00555] Avg episode reward: [(0, '6.012')]
[2023-02-24 13:01:46,530][12825] Updated weights for policy 0, policy_version 500 (0.0022)
[2023-02-24 13:01:51,308][00555] Fps is (10 sec: 2868.2, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 2060288. Throughput: 0: 830.1. Samples: 515118. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:01:51,314][00555] Avg episode reward: [(0, '6.001')]
[2023-02-24 13:01:56,308][00555] Fps is (10 sec: 3688.8, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 2080768. Throughput: 0: 854.4. Samples: 521282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:01:56,311][00555] Avg episode reward: [(0, '6.165')]
[2023-02-24 13:01:57,692][12825] Updated weights for policy 0, policy_version 510 (0.0016)
[2023-02-24 13:02:01,314][00555] Fps is (10 sec: 4093.8, 60 sec: 3413.0, 300 sec: 3401.7). Total num frames: 2101248. Throughput: 0: 857.6. Samples: 524508. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:02:01,317][00555] Avg episode reward: [(0, '6.228')]
[2023-02-24 13:02:06,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 2113536. Throughput: 0: 837.3. Samples: 529002. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:02:06,311][00555] Avg episode reward: [(0, '6.181')]
[2023-02-24 13:02:11,183][12825] Updated weights for policy 0, policy_version 520 (0.0022)
[2023-02-24 13:02:11,308][00555] Fps is (10 sec: 2868.7, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 2129920. Throughput: 0: 844.3. Samples: 533138. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:02:11,314][00555] Avg episode reward: [(0, '6.135')]
[2023-02-24 13:02:16,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 2150400. Throughput: 0: 866.9. Samples: 536180. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:02:16,317][00555] Avg episode reward: [(0, '5.795')]
[2023-02-24 13:02:20,671][12825] Updated weights for policy 0, policy_version 530 (0.0019)
[2023-02-24 13:02:21,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3415.7). Total num frames: 2170880. Throughput: 0: 885.2. Samples: 542810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:02:21,311][00555] Avg episode reward: [(0, '5.895')]
[2023-02-24 13:02:26,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3415.7). Total num frames: 2187264. Throughput: 0: 853.4. Samples: 547332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:02:26,314][00555] Avg episode reward: [(0, '5.961')]
[2023-02-24 13:02:31,309][00555] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3415.7). Total num frames: 2199552. Throughput: 0: 852.8. Samples: 549376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:02:31,315][00555] Avg episode reward: [(0, '6.192')]
[2023-02-24 13:02:34,247][12825] Updated weights for policy 0, policy_version 540 (0.0053)
[2023-02-24 13:02:36,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 2220032. Throughput: 0: 873.9. Samples: 554442. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:02:36,315][00555] Avg episode reward: [(0, '6.252')]
[2023-02-24 13:02:41,308][00555] Fps is (10 sec: 4096.1, 60 sec: 3481.8, 300 sec: 3415.6). Total num frames: 2240512. Throughput: 0: 883.3. Samples: 561030. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:02:41,316][00555] Avg episode reward: [(0, '6.423')]
[2023-02-24 13:02:41,328][12811] Saving new best policy, reward=6.423!
[2023-02-24 13:02:44,882][12825] Updated weights for policy 0, policy_version 550 (0.0014)
[2023-02-24 13:02:46,312][00555] Fps is (10 sec: 3275.7, 60 sec: 3481.8, 300 sec: 3401.8). Total num frames: 2252800. Throughput: 0: 865.2. Samples: 563442. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:02:46,317][00555] Avg episode reward: [(0, '6.227')]
[2023-02-24 13:02:51,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3415.7). Total num frames: 2269184. Throughput: 0: 858.7. Samples: 567642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:02:51,315][00555] Avg episode reward: [(0, '6.288')]
[2023-02-24 13:02:56,308][00555] Fps is (10 sec: 3687.7, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2289664. Throughput: 0: 880.9. Samples: 572780. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:02:56,311][00555] Avg episode reward: [(0, '6.244')]
[2023-02-24 13:02:57,303][12825] Updated weights for policy 0, policy_version 560 (0.0045)
[2023-02-24 13:03:01,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.9, 300 sec: 3415.6). Total num frames: 2310144. Throughput: 0: 884.9. Samples: 576002. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:03:01,312][00555] Avg episode reward: [(0, '6.011')]
[2023-02-24 13:03:01,329][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000564_2310144.pth...
[2023-02-24 13:03:01,457][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000362_1482752.pth
[2023-02-24 13:03:06,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3401.8). Total num frames: 2322432. Throughput: 0: 860.2. Samples: 581520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:03:06,314][00555] Avg episode reward: [(0, '6.035')]
[2023-02-24 13:03:09,264][12825] Updated weights for policy 0, policy_version 570 (0.0021)
[2023-02-24 13:03:11,311][00555] Fps is (10 sec: 2866.5, 60 sec: 3481.5, 300 sec: 3415.6). Total num frames: 2338816. Throughput: 0: 851.9. Samples: 585668. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:03:11,317][00555] Avg episode reward: [(0, '6.269')]
[2023-02-24 13:03:16,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 2355200. Throughput: 0: 849.5. Samples: 587604. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:03:16,312][00555] Avg episode reward: [(0, '6.440')]
[2023-02-24 13:03:16,319][12811] Saving new best policy, reward=6.440!
[2023-02-24 13:03:20,771][12825] Updated weights for policy 0, policy_version 580 (0.0028)
[2023-02-24 13:03:21,308][00555] Fps is (10 sec: 3687.3, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 2375680. Throughput: 0: 879.1. Samples: 594002. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:03:21,310][00555] Avg episode reward: [(0, '7.217')]
[2023-02-24 13:03:21,321][12811] Saving new best policy, reward=7.217!
[2023-02-24 13:03:26,315][00555] Fps is (10 sec: 4093.4, 60 sec: 3481.2, 300 sec: 3429.5). Total num frames: 2396160. Throughput: 0: 857.3. Samples: 599614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:03:26,332][00555] Avg episode reward: [(0, '7.184')]
[2023-02-24 13:03:31,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2408448. Throughput: 0: 850.0. Samples: 601690. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:03:31,315][00555] Avg episode reward: [(0, '7.070')]
[2023-02-24 13:03:33,854][12825] Updated weights for policy 0, policy_version 590 (0.0031)
[2023-02-24 13:03:36,313][00555] Fps is (10 sec: 2867.8, 60 sec: 3413.1, 300 sec: 3429.5). Total num frames: 2424832. Throughput: 0: 847.1. Samples: 605764. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:03:36,316][00555] Avg episode reward: [(0, '6.570')]
[2023-02-24 13:03:41,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 2445312. Throughput: 0: 875.5. Samples: 612178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:03:41,311][00555] Avg episode reward: [(0, '6.823')]
[2023-02-24 13:03:43,805][12825] Updated weights for policy 0, policy_version 600 (0.0019)
[2023-02-24 13:03:46,308][00555] Fps is (10 sec: 4097.8, 60 sec: 3550.1, 300 sec: 3415.6). Total num frames: 2465792. Throughput: 0: 876.2. Samples: 615432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:03:46,316][00555] Avg episode reward: [(0, '7.104')]
[2023-02-24 13:03:51,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 2478080. Throughput: 0: 855.3. Samples: 620008. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:03:51,316][00555] Avg episode reward: [(0, '7.537')]
[2023-02-24 13:03:51,330][12811] Saving new best policy, reward=7.537!
[2023-02-24 13:03:56,308][00555] Fps is (10 sec: 2457.6, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 2490368. Throughput: 0: 850.9. Samples: 623956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:03:56,319][00555] Avg episode reward: [(0, '7.463')]
[2023-02-24 13:03:57,459][12825] Updated weights for policy 0, policy_version 610 (0.0017)
[2023-02-24 13:04:01,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3415.7). Total num frames: 2510848. Throughput: 0: 879.1. Samples: 627162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:04:01,311][00555] Avg episode reward: [(0, '7.896')]
[2023-02-24 13:04:01,346][12811] Saving new best policy, reward=7.896!
[2023-02-24 13:04:06,310][00555] Fps is (10 sec: 4504.8, 60 sec: 3549.8, 300 sec: 3415.6). Total num frames: 2535424. Throughput: 0: 875.2. Samples: 633388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:04:06,314][00555] Avg episode reward: [(0, '8.468')]
[2023-02-24 13:04:06,320][12811] Saving new best policy, reward=8.468!
[2023-02-24 13:04:07,781][12825] Updated weights for policy 0, policy_version 620 (0.0018)
[2023-02-24 13:04:11,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3415.6). Total num frames: 2547712. Throughput: 0: 848.7. Samples: 637798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:04:11,311][00555] Avg episode reward: [(0, '8.490')]
[2023-02-24 13:04:11,327][12811] Saving new best policy, reward=8.490!
[2023-02-24 13:04:16,311][00555] Fps is (10 sec: 2457.4, 60 sec: 3413.2, 300 sec: 3401.7). Total num frames: 2560000. Throughput: 0: 845.6. Samples: 639746. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:04:16,316][00555] Avg episode reward: [(0, '8.020')]
[2023-02-24 13:04:20,783][12825] Updated weights for policy 0, policy_version 630 (0.0025)
[2023-02-24 13:04:21,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3415.7). Total num frames: 2580480. Throughput: 0: 875.3. Samples: 645150. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:04:21,311][00555] Avg episode reward: [(0, '7.494')]
[2023-02-24 13:04:26,309][00555] Fps is (10 sec: 4096.8, 60 sec: 3413.7, 300 sec: 3401.8). Total num frames: 2600960. Throughput: 0: 875.8. Samples: 651590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:04:26,320][00555] Avg episode reward: [(0, '7.870')]
[2023-02-24 13:04:31,315][00555] Fps is (10 sec: 3684.0, 60 sec: 3481.2, 300 sec: 3415.6). Total num frames: 2617344. Throughput: 0: 855.4. Samples: 653930. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:04:31,322][00555] Avg episode reward: [(0, '8.008')]
[2023-02-24 13:04:32,282][12825] Updated weights for policy 0, policy_version 640 (0.0019)
[2023-02-24 13:04:36,309][00555] Fps is (10 sec: 2867.3, 60 sec: 3413.6, 300 sec: 3401.8). Total num frames: 2629632. Throughput: 0: 843.7. Samples: 657974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:04:36,314][00555] Avg episode reward: [(0, '8.751')]
[2023-02-24 13:04:36,321][12811] Saving new best policy, reward=8.751!
[2023-02-24 13:04:41,308][00555] Fps is (10 sec: 3278.9, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 2650112. Throughput: 0: 877.9. Samples: 663462. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:04:41,318][00555] Avg episode reward: [(0, '9.038')]
[2023-02-24 13:04:41,334][12811] Saving new best policy, reward=9.038!
[2023-02-24 13:04:43,745][12825] Updated weights for policy 0, policy_version 650 (0.0033)
[2023-02-24 13:04:46,308][00555] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 2670592. Throughput: 0: 877.5. Samples: 666648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:04:46,316][00555] Avg episode reward: [(0, '8.935')]
[2023-02-24 13:04:51,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 2686976. Throughput: 0: 864.6. Samples: 672294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:04:51,315][00555] Avg episode reward: [(0, '8.616')]
[2023-02-24 13:04:56,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 2699264. Throughput: 0: 848.9. Samples: 675998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:04:56,317][00555] Avg episode reward: [(0, '8.791')]
[2023-02-24 13:04:56,730][12825] Updated weights for policy 0, policy_version 660 (0.0019)
[2023-02-24 13:05:01,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 2715648. Throughput: 0: 851.8. Samples: 678076. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:05:01,311][00555] Avg episode reward: [(0, '8.696')]
[2023-02-24 13:05:01,325][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000663_2715648.pth...
[2023-02-24 13:05:01,486][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000463_1896448.pth
[2023-02-24 13:05:06,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3345.2, 300 sec: 3415.6). Total num frames: 2736128. Throughput: 0: 865.6. Samples: 684102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:05:06,311][00555] Avg episode reward: [(0, '9.541')]
[2023-02-24 13:05:06,319][12811] Saving new best policy, reward=9.541!
[2023-02-24 13:05:07,785][12825] Updated weights for policy 0, policy_version 670 (0.0037)
[2023-02-24 13:05:11,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 2752512. Throughput: 0: 843.5. Samples: 689548. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:05:11,313][00555] Avg episode reward: [(0, '9.623')]
[2023-02-24 13:05:11,328][12811] Saving new best policy, reward=5.623!
[2023-02-24 13:05:16,317][00555] Fps is (10 sec: 2864.8, 60 sec: 3413.0, 300 sec: 3401.7). Total num frames: 2764800. Throughput: 0: 831.6. Samples: 691354. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:05:16,320][00555] Avg episode reward: [(0, '9.572')] |
|
[2023-02-24 13:05:21,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 2781184. Throughput: 0: 835.7. Samples: 695580. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:05:21,314][00555] Avg episode reward: [(0, '9.343')] |
|
[2023-02-24 13:05:21,628][12825] Updated weights for policy 0, policy_version 680 (0.0032) |
|
[2023-02-24 13:05:26,308][00555] Fps is (10 sec: 4099.5, 60 sec: 3413.4, 300 sec: 3415.6). Total num frames: 2805760. Throughput: 0: 855.6. Samples: 701964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:05:26,311][00555] Avg episode reward: [(0, '9.351')] |
|
[2023-02-24 13:05:31,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3413.7, 300 sec: 3401.8). Total num frames: 2822144. Throughput: 0: 857.5. Samples: 705236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:05:31,315][00555] Avg episode reward: [(0, '9.716')] |
|
[2023-02-24 13:05:31,331][12811] Saving new best policy, reward=9.716! |
|
[2023-02-24 13:05:31,665][12825] Updated weights for policy 0, policy_version 690 (0.0016) |
|
[2023-02-24 13:05:36,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.7). Total num frames: 2838528. Throughput: 0: 827.3. Samples: 709522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:05:36,313][00555] Avg episode reward: [(0, '9.400')] |
|
[2023-02-24 13:05:41,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 2850816. Throughput: 0: 838.9. Samples: 713750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:05:41,317][00555] Avg episode reward: [(0, '9.198')] |
|
[2023-02-24 13:05:44,539][12825] Updated weights for policy 0, policy_version 700 (0.0017) |
|
[2023-02-24 13:05:46,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 2871296. Throughput: 0: 866.3. Samples: 717058. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:05:46,323][00555] Avg episode reward: [(0, '8.758')] |
|
[2023-02-24 13:05:51,310][00555] Fps is (10 sec: 4505.0, 60 sec: 3481.5, 300 sec: 3429.5). Total num frames: 2895872. Throughput: 0: 878.2. Samples: 723624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:05:51,316][00555] Avg episode reward: [(0, '9.595')] |
|
[2023-02-24 13:05:55,680][12825] Updated weights for policy 0, policy_version 710 (0.0022) |
|
[2023-02-24 13:05:56,312][00555] Fps is (10 sec: 3685.1, 60 sec: 3481.4, 300 sec: 3429.5). Total num frames: 2908160. Throughput: 0: 855.1. Samples: 728030. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:05:56,316][00555] Avg episode reward: [(0, '10.257')] |
|
[2023-02-24 13:05:56,321][12811] Saving new best policy, reward=10.257! |
|
[2023-02-24 13:06:01,308][00555] Fps is (10 sec: 2457.9, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 2920448. Throughput: 0: 860.2. Samples: 730058. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:06:01,317][00555] Avg episode reward: [(0, '10.389')] |
|
[2023-02-24 13:06:01,328][12811] Saving new best policy, reward=10.389! |
|
[2023-02-24 13:06:06,308][00555] Fps is (10 sec: 3277.9, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 2940928. Throughput: 0: 878.0. Samples: 735092. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:06:06,311][00555] Avg episode reward: [(0, '10.706')] |
|
[2023-02-24 13:06:06,317][12811] Saving new best policy, reward=10.706! |
|
[2023-02-24 13:06:07,851][12825] Updated weights for policy 0, policy_version 720 (0.0021) |
|
[2023-02-24 13:06:11,308][00555] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2961408. Throughput: 0: 879.3. Samples: 741532. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:06:11,316][00555] Avg episode reward: [(0, '11.126')] |
|
[2023-02-24 13:06:11,327][12811] Saving new best policy, reward=11.126! |
|
[2023-02-24 13:06:16,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3550.4, 300 sec: 3429.5). Total num frames: 2977792. Throughput: 0: 860.8. Samples: 743974. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:06:16,311][00555] Avg episode reward: [(0, '11.143')] |
|
[2023-02-24 13:06:16,314][12811] Saving new best policy, reward=11.143! |
|
[2023-02-24 13:06:20,237][12825] Updated weights for policy 0, policy_version 730 (0.0023) |
|
[2023-02-24 13:06:21,309][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2990080. Throughput: 0: 855.7. Samples: 748028. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:06:21,314][00555] Avg episode reward: [(0, '11.667')] |
|
[2023-02-24 13:06:21,336][12811] Saving new best policy, reward=11.667! |
|
[2023-02-24 13:06:26,316][00555] Fps is (10 sec: 3274.4, 60 sec: 3412.9, 300 sec: 3457.2). Total num frames: 3010560. Throughput: 0: 876.3. Samples: 753188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-24 13:06:26,323][00555] Avg episode reward: [(0, '12.835')] |
|
[2023-02-24 13:06:26,329][12811] Saving new best policy, reward=12.835! |
|
[2023-02-24 13:06:31,107][12825] Updated weights for policy 0, policy_version 740 (0.0032) |
|
[2023-02-24 13:06:31,309][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3031040. Throughput: 0: 873.6. Samples: 756370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-24 13:06:31,311][00555] Avg episode reward: [(0, '12.170')] |
|
[2023-02-24 13:06:36,309][00555] Fps is (10 sec: 3689.0, 60 sec: 3481.6, 300 sec: 3443.5). Total num frames: 3047424. Throughput: 0: 855.0. Samples: 762096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:06:36,317][00555] Avg episode reward: [(0, '11.333')] |
|
[2023-02-24 13:06:41,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.5). Total num frames: 3059712. Throughput: 0: 850.1. Samples: 766280. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:06:41,313][00555] Avg episode reward: [(0, '10.710')] |
|
[2023-02-24 13:06:44,549][12825] Updated weights for policy 0, policy_version 750 (0.0044) |
|
[2023-02-24 13:06:46,308][00555] Fps is (10 sec: 2867.3, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3076096. Throughput: 0: 850.2. Samples: 768318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:06:46,315][00555] Avg episode reward: [(0, '10.403')] |
|
[2023-02-24 13:06:51,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3457.3). Total num frames: 3100672. Throughput: 0: 882.5. Samples: 774806. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:06:51,311][00555] Avg episode reward: [(0, '10.988')] |
|
[2023-02-24 13:06:53,985][12825] Updated weights for policy 0, policy_version 760 (0.0024) |
|
[2023-02-24 13:06:56,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.8, 300 sec: 3443.5). Total num frames: 3117056. Throughput: 0: 866.0. Samples: 780504. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:06:56,311][00555] Avg episode reward: [(0, '11.936')] |
|
[2023-02-24 13:07:01,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 3129344. Throughput: 0: 858.1. Samples: 782590. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-24 13:07:01,316][00555] Avg episode reward: [(0, '12.689')] |
|
[2023-02-24 13:07:01,355][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000765_3133440.pth... |
|
[2023-02-24 13:07:01,570][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000564_2310144.pth |
|
[2023-02-24 13:07:06,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3145728. Throughput: 0: 857.4. Samples: 786610. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-24 13:07:06,311][00555] Avg episode reward: [(0, '13.680')] |
|
[2023-02-24 13:07:06,314][12811] Saving new best policy, reward=13.680! |
|
[2023-02-24 13:07:07,705][12825] Updated weights for policy 0, policy_version 770 (0.0019) |
|
[2023-02-24 13:07:11,309][00555] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3166208. Throughput: 0: 884.5. Samples: 792986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:07:11,317][00555] Avg episode reward: [(0, '13.529')] |
|
[2023-02-24 13:07:16,311][00555] Fps is (10 sec: 4095.0, 60 sec: 3481.5, 300 sec: 3443.4). Total num frames: 3186688. Throughput: 0: 885.5. Samples: 796220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:07:16,318][00555] Avg episode reward: [(0, '13.269')] |
|
[2023-02-24 13:07:17,863][12825] Updated weights for policy 0, policy_version 780 (0.0040) |
|
[2023-02-24 13:07:21,309][00555] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3203072. Throughput: 0: 861.9. Samples: 800880. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-24 13:07:21,314][00555] Avg episode reward: [(0, '12.799')] |
|
[2023-02-24 13:07:26,308][00555] Fps is (10 sec: 2867.9, 60 sec: 3413.7, 300 sec: 3443.4). Total num frames: 3215360. Throughput: 0: 861.2. Samples: 805034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:07:26,317][00555] Avg episode reward: [(0, '12.198')] |
|
[2023-02-24 13:07:30,463][12825] Updated weights for policy 0, policy_version 790 (0.0028) |
|
[2023-02-24 13:07:31,308][00555] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3235840. Throughput: 0: 884.5. Samples: 808122. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:07:31,315][00555] Avg episode reward: [(0, '12.446')] |
|
[2023-02-24 13:07:36,308][00555] Fps is (10 sec: 4095.9, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 3256320. Throughput: 0: 882.2. Samples: 814506. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-24 13:07:36,311][00555] Avg episode reward: [(0, '13.888')] |
|
[2023-02-24 13:07:36,405][12811] Saving new best policy, reward=13.888! |
|
[2023-02-24 13:07:41,311][00555] Fps is (10 sec: 3685.4, 60 sec: 3549.7, 300 sec: 3457.3). Total num frames: 3272704. Throughput: 0: 854.2. Samples: 818946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-24 13:07:41,314][00555] Avg episode reward: [(0, '14.507')] |
|
[2023-02-24 13:07:41,328][12811] Saving new best policy, reward=14.507! |
|
[2023-02-24 13:07:42,663][12825] Updated weights for policy 0, policy_version 800 (0.0021) |
|
[2023-02-24 13:07:46,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 3284992. Throughput: 0: 851.3. Samples: 820898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:07:46,311][00555] Avg episode reward: [(0, '14.299')] |
|
[2023-02-24 13:07:51,308][00555] Fps is (10 sec: 3277.7, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3305472. Throughput: 0: 880.0. Samples: 826212. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:07:51,317][00555] Avg episode reward: [(0, '14.553')] |
|
[2023-02-24 13:07:51,333][12811] Saving new best policy, reward=14.553! |
|
[2023-02-24 13:07:53,857][12825] Updated weights for policy 0, policy_version 810 (0.0023) |
|
[2023-02-24 13:07:56,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 3325952. Throughput: 0: 881.6. Samples: 832658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:07:56,316][00555] Avg episode reward: [(0, '14.523')] |
|
[2023-02-24 13:08:01,309][00555] Fps is (10 sec: 3686.2, 60 sec: 3549.8, 300 sec: 3457.3). Total num frames: 3342336. Throughput: 0: 867.6. Samples: 835260. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:08:01,314][00555] Avg episode reward: [(0, '14.233')] |
|
[2023-02-24 13:08:06,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 3354624. Throughput: 0: 853.0. Samples: 839264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-24 13:08:06,311][00555] Avg episode reward: [(0, '15.123')] |
|
[2023-02-24 13:08:06,314][12811] Saving new best policy, reward=15.123! |
|
[2023-02-24 13:08:06,700][12825] Updated weights for policy 0, policy_version 820 (0.0038) |
|
[2023-02-24 13:08:11,308][00555] Fps is (10 sec: 2867.3, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3371008. Throughput: 0: 870.9. Samples: 844224. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:08:11,316][00555] Avg episode reward: [(0, '15.415')] |
|
[2023-02-24 13:08:11,327][12811] Saving new best policy, reward=15.415! |
|
[2023-02-24 13:08:16,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3457.3). Total num frames: 3395584. Throughput: 0: 871.9. Samples: 847356. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-24 13:08:16,314][00555] Avg episode reward: [(0, '16.739')] |
|
[2023-02-24 13:08:16,317][12811] Saving new best policy, reward=16.739! |
|
[2023-02-24 13:08:17,181][12825] Updated weights for policy 0, policy_version 830 (0.0019) |
|
[2023-02-24 13:08:21,310][00555] Fps is (10 sec: 4095.2, 60 sec: 3481.5, 300 sec: 3443.5). Total num frames: 3411968. Throughput: 0: 863.9. Samples: 853384. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:08:21,317][00555] Avg episode reward: [(0, '16.226')] |
|
[2023-02-24 13:08:26,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 3424256. Throughput: 0: 855.6. Samples: 857446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:08:26,317][00555] Avg episode reward: [(0, '16.461')] |
|
[2023-02-24 13:08:30,759][12825] Updated weights for policy 0, policy_version 840 (0.0022) |
|
[2023-02-24 13:08:31,309][00555] Fps is (10 sec: 2867.7, 60 sec: 3413.3, 300 sec: 3443.5). Total num frames: 3440640. Throughput: 0: 858.7. Samples: 859538. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:08:31,318][00555] Avg episode reward: [(0, '15.770')] |
|
[2023-02-24 13:08:36,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3461120. Throughput: 0: 874.9. Samples: 865582. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-24 13:08:36,317][00555] Avg episode reward: [(0, '16.863')] |
|
[2023-02-24 13:08:36,321][12811] Saving new best policy, reward=16.863! |
|
[2023-02-24 13:08:40,157][12825] Updated weights for policy 0, policy_version 850 (0.0016) |
|
[2023-02-24 13:08:41,308][00555] Fps is (10 sec: 4096.2, 60 sec: 3481.8, 300 sec: 3443.4). Total num frames: 3481600. Throughput: 0: 866.9. Samples: 871668. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:08:41,314][00555] Avg episode reward: [(0, '17.399')] |
|
[2023-02-24 13:08:41,337][12811] Saving new best policy, reward=17.399! |
|
[2023-02-24 13:08:46,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 3497984. Throughput: 0: 853.3. Samples: 873660. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-24 13:08:46,314][00555] Avg episode reward: [(0, '16.946')] |
|
[2023-02-24 13:08:51,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3510272. Throughput: 0: 857.0. Samples: 877830. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-24 13:08:51,312][00555] Avg episode reward: [(0, '16.412')] |
|
[2023-02-24 13:08:53,653][12825] Updated weights for policy 0, policy_version 860 (0.0015) |
|
[2023-02-24 13:08:56,309][00555] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3530752. Throughput: 0: 883.4. Samples: 883978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:08:56,312][00555] Avg episode reward: [(0, '16.120')] |
|
[2023-02-24 13:09:01,309][00555] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 3555328. Throughput: 0: 885.8. Samples: 887218. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:09:01,314][00555] Avg episode reward: [(0, '16.557')] |
|
[2023-02-24 13:09:01,333][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000868_3555328.pth... |
|
[2023-02-24 13:09:01,511][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000663_2715648.pth |
|
[2023-02-24 13:09:04,416][12825] Updated weights for policy 0, policy_version 870 (0.0016) |
|
[2023-02-24 13:09:06,308][00555] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 3567616. Throughput: 0: 857.1. Samples: 891954. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:09:06,313][00555] Avg episode reward: [(0, '17.150')] |
|
[2023-02-24 13:09:11,308][00555] Fps is (10 sec: 2457.6, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3579904. Throughput: 0: 859.0. Samples: 896102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:09:11,312][00555] Avg episode reward: [(0, '17.838')] |
|
[2023-02-24 13:09:11,322][12811] Saving new best policy, reward=17.838! |
|
[2023-02-24 13:09:16,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3600384. Throughput: 0: 876.3. Samples: 898970. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:09:16,314][00555] Avg episode reward: [(0, '20.210')] |
|
[2023-02-24 13:09:16,317][12811] Saving new best policy, reward=20.210! |
|
[2023-02-24 13:09:16,627][12825] Updated weights for policy 0, policy_version 880 (0.0028) |
|
[2023-02-24 13:09:21,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3457.3). Total num frames: 3620864. Throughput: 0: 886.8. Samples: 905486. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-24 13:09:21,312][00555] Avg episode reward: [(0, '21.979')] |
|
[2023-02-24 13:09:21,326][12811] Saving new best policy, reward=21.979! |
|
[2023-02-24 13:09:26,308][00555] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3457.4). Total num frames: 3637248. Throughput: 0: 853.0. Samples: 910052. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-24 13:09:26,311][00555] Avg episode reward: [(0, '21.710')] |
|
[2023-02-24 13:09:28,982][12825] Updated weights for policy 0, policy_version 890 (0.0017) |
|
[2023-02-24 13:09:31,308][00555] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3649536. Throughput: 0: 853.6. Samples: 912072. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-24 13:09:31,311][00555] Avg episode reward: [(0, '21.414')] |
|
[2023-02-24 13:09:36,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3670016. Throughput: 0: 871.2. Samples: 917032. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:09:36,311][00555] Avg episode reward: [(0, '19.233')] |
|
[2023-02-24 13:09:39,814][12825] Updated weights for policy 0, policy_version 900 (0.0013) |
|
[2023-02-24 13:09:41,308][00555] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3690496. Throughput: 0: 881.1. Samples: 923628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:09:41,317][00555] Avg episode reward: [(0, '17.322')] |
|
[2023-02-24 13:09:46,308][00555] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3706880. Throughput: 0: 871.1. Samples: 926418. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-24 13:09:46,316][00555] Avg episode reward: [(0, '16.919')] |
|
[2023-02-24 13:09:51,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 3723264. Throughput: 0: 858.3. Samples: 930578. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-24 13:09:51,314][00555] Avg episode reward: [(0, '16.995')] |
|
[2023-02-24 13:09:52,778][12825] Updated weights for policy 0, policy_version 910 (0.0017) |
|
[2023-02-24 13:09:56,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3739648. Throughput: 0: 878.0. Samples: 935614. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-24 13:09:56,317][00555] Avg episode reward: [(0, '17.768')] |
|
[2023-02-24 13:10:01,308][00555] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3760128. Throughput: 0: 886.3. Samples: 938854. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:01,316][00555] Avg episode reward: [(0, '17.471')] |
|
[2023-02-24 13:10:02,958][12825] Updated weights for policy 0, policy_version 920 (0.0034) |
|
[2023-02-24 13:10:06,309][00555] Fps is (10 sec: 3686.1, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3776512. Throughput: 0: 872.5. Samples: 944748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:06,315][00555] Avg episode reward: [(0, '18.155')] |
|
[2023-02-24 13:10:11,314][00555] Fps is (10 sec: 3274.8, 60 sec: 3549.5, 300 sec: 3485.1). Total num frames: 3792896. Throughput: 0: 862.9. Samples: 948888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:11,321][00555] Avg episode reward: [(0, '18.898')] |
|
[2023-02-24 13:10:16,308][00555] Fps is (10 sec: 2867.4, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3805184. Throughput: 0: 863.3. Samples: 950920. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:16,314][00555] Avg episode reward: [(0, '20.834')] |
|
[2023-02-24 13:10:16,338][12825] Updated weights for policy 0, policy_version 930 (0.0023) |
|
[2023-02-24 13:10:21,308][00555] Fps is (10 sec: 3688.7, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3829760. Throughput: 0: 893.9. Samples: 957258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:21,312][00555] Avg episode reward: [(0, '19.700')] |
|
[2023-02-24 13:10:26,219][12825] Updated weights for policy 0, policy_version 940 (0.0018) |
|
[2023-02-24 13:10:26,308][00555] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3850240. Throughput: 0: 879.4. Samples: 963202. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:26,319][00555] Avg episode reward: [(0, '19.703')] |
|
[2023-02-24 13:10:31,316][00555] Fps is (10 sec: 3274.4, 60 sec: 3549.4, 300 sec: 3471.1). Total num frames: 3862528. Throughput: 0: 862.2. Samples: 965222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:31,319][00555] Avg episode reward: [(0, '19.465')] |
|
[2023-02-24 13:10:36,308][00555] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3874816. Throughput: 0: 859.6. Samples: 969260. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-24 13:10:36,317][00555] Avg episode reward: [(0, '19.489')] |
|
[2023-02-24 13:10:39,213][12825] Updated weights for policy 0, policy_version 950 (0.0022) |
|
[2023-02-24 13:10:41,308][00555] Fps is (10 sec: 3689.1, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 3899392. Throughput: 0: 886.4. Samples: 975504. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:41,317][00555] Avg episode reward: [(0, '18.301')] |
|
[2023-02-24 13:10:46,308][00555] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 3919872. Throughput: 0: 886.1. Samples: 978730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:10:46,311][00555] Avg episode reward: [(0, '19.088')] |
|
[2023-02-24 13:10:50,318][12825] Updated weights for policy 0, policy_version 960 (0.0019) |
|
[2023-02-24 13:10:51,308][00555] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3932160. Throughput: 0: 863.7. Samples: 983616. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:51,316][00555] Avg episode reward: [(0, '19.359')] |
|
[2023-02-24 13:10:56,308][00555] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 3948544. Throughput: 0: 864.7. Samples: 987796. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-24 13:10:56,311][00555] Avg episode reward: [(0, '20.635')] |
|
[2023-02-24 13:11:01,309][00555] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 3969024. Throughput: 0: 889.0. Samples: 990924. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-24 13:11:01,312][00555] Avg episode reward: [(0, '21.785')] |
|
[2023-02-24 13:11:01,330][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000969_3969024.pth... |
|
[2023-02-24 13:11:01,457][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000765_3133440.pth |
|
[2023-02-24 13:11:02,030][12825] Updated weights for policy 0, policy_version 970 (0.0013) |
|
[2023-02-24 13:11:06,308][00555] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3989504. Throughput: 0: 889.8. Samples: 997298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-24 13:11:06,313][00555] Avg episode reward: [(0, '21.723')] |
|
[2023-02-24 13:11:11,092][12811] Stopping Batcher_0... |
|
[2023-02-24 13:11:11,092][12811] Loop batcher_evt_loop terminating... |
|
[2023-02-24 13:11:11,093][00555] Component Batcher_0 stopped! |
|
[2023-02-24 13:11:11,097][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-02-24 13:11:11,177][00555] Component RolloutWorker_w3 stopped! |
|
[2023-02-24 13:11:11,179][12829] Stopping RolloutWorker_w3... |
|
[2023-02-24 13:11:11,185][00555] Component RolloutWorker_w5 stopped! |
|
[2023-02-24 13:11:11,188][12831] Stopping RolloutWorker_w5... |
|
[2023-02-24 13:11:11,193][12825] Weights refcount: 2 0 |
|
[2023-02-24 13:11:11,182][12829] Loop rollout_proc3_evt_loop terminating... |
|
[2023-02-24 13:11:11,189][12831] Loop rollout_proc5_evt_loop terminating... |
|
[2023-02-24 13:11:11,200][00555] Component RolloutWorker_w1 stopped! |
|
[2023-02-24 13:11:11,202][12827] Stopping RolloutWorker_w1... |
|
[2023-02-24 13:11:11,207][00555] Component RolloutWorker_w7 stopped! |
|
[2023-02-24 13:11:11,209][12832] Stopping RolloutWorker_w7... |
|
[2023-02-24 13:11:11,209][12827] Loop rollout_proc1_evt_loop terminating... |
|
[2023-02-24 13:11:11,220][12825] Stopping InferenceWorker_p0-w0... |
|
[2023-02-24 13:11:11,220][00555] Component InferenceWorker_p0-w0 stopped! |
|
[2023-02-24 13:11:11,211][12832] Loop rollout_proc7_evt_loop terminating... |
|
[2023-02-24 13:11:11,220][12825] Loop inference_proc0-0_evt_loop terminating... |
|
[2023-02-24 13:11:11,285][12826] Stopping RolloutWorker_w0... |
|
[2023-02-24 13:11:11,285][00555] Component RolloutWorker_w0 stopped! |
|
[2023-02-24 13:11:11,304][12828] Stopping RolloutWorker_w2... |
|
[2023-02-24 13:11:11,304][12828] Loop rollout_proc2_evt_loop terminating... |
|
[2023-02-24 13:11:11,306][12826] Loop rollout_proc0_evt_loop terminating... |
|
[2023-02-24 13:11:11,304][00555] Component RolloutWorker_w2 stopped! |
|
[2023-02-24 13:11:11,316][12833] Stopping RolloutWorker_w6... |
|
[2023-02-24 13:11:11,317][12833] Loop rollout_proc6_evt_loop terminating... |
|
[2023-02-24 13:11:11,316][00555] Component RolloutWorker_w6 stopped! |
|
[2023-02-24 13:11:11,350][12830] Stopping RolloutWorker_w4... |
|
[2023-02-24 13:11:11,350][00555] Component RolloutWorker_w4 stopped! |
|
[2023-02-24 13:11:11,351][12830] Loop rollout_proc4_evt_loop terminating... |
|
[2023-02-24 13:11:11,380][12811] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000868_3555328.pth |
|
[2023-02-24 13:11:11,397][12811] Saving new best policy, reward=22.466! |
|
[2023-02-24 13:11:11,668][12811] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-02-24 13:11:11,974][12811] Stopping LearnerWorker_p0... |
|
[2023-02-24 13:11:11,974][12811] Loop learner_proc0_evt_loop terminating... |
|
[2023-02-24 13:11:11,978][00555] Component LearnerWorker_p0 stopped! |
|
[2023-02-24 13:11:11,988][00555] Waiting for process learner_proc0 to stop... |
|
[2023-02-24 13:11:14,176][00555] Waiting for process inference_proc0-0 to join... |
|
[2023-02-24 13:11:14,605][00555] Waiting for process rollout_proc0 to join... |
|
[2023-02-24 13:11:15,404][00555] Waiting for process rollout_proc1 to join...
[2023-02-24 13:11:15,406][00555] Waiting for process rollout_proc2 to join...
[2023-02-24 13:11:15,407][00555] Waiting for process rollout_proc3 to join...
[2023-02-24 13:11:15,409][00555] Waiting for process rollout_proc4 to join...
[2023-02-24 13:11:15,410][00555] Waiting for process rollout_proc5 to join...
[2023-02-24 13:11:15,412][00555] Waiting for process rollout_proc6 to join...
[2023-02-24 13:11:15,414][00555] Waiting for process rollout_proc7 to join...
[2023-02-24 13:11:15,415][00555] Batcher 0 profile tree view:
batching: 27.1628, releasing_batches: 0.0260
[2023-02-24 13:11:15,417][00555] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 558.8544
update_model: 8.6133
  weight_update: 0.0013
one_step: 0.0096
  handle_policy_step: 568.1463
    deserialize: 16.0533, stack: 3.1144, obs_to_device_normalize: 120.8237, forward: 280.8360, send_messages: 27.4523
    prepare_outputs: 91.6313
      to_cpu: 57.9282
[2023-02-24 13:11:15,418][00555] Learner 0 profile tree view:
misc: 0.0066, prepare_batch: 16.5349
train: 78.2531
  epoch_init: 0.0106, minibatch_init: 0.0066, losses_postprocess: 0.6829, kl_divergence: 0.6574, after_optimizer: 32.8680
  calculate_losses: 27.9283
    losses_init: 0.0060, forward_head: 1.8012, bptt_initial: 18.2903, tail: 1.3498, advantages_returns: 0.3073, losses: 3.3432
    bptt: 2.5243
      bptt_forward_core: 2.4403
  update: 15.4441
    clip: 1.5340
[2023-02-24 13:11:15,419][00555] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.2912, enqueue_policy_requests: 157.4487, env_step: 884.6014, overhead: 23.9213, complete_rollouts: 7.0940
save_policy_outputs: 22.7150
  split_output_tensors: 10.8235
[2023-02-24 13:11:15,421][00555] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.3376, enqueue_policy_requests: 158.1552, env_step: 882.4598, overhead: 23.6843, complete_rollouts: 7.9125
save_policy_outputs: 22.6479
  split_output_tensors: 11.0344
[2023-02-24 13:11:15,422][00555] Loop Runner_EvtLoop terminating...
[2023-02-24 13:11:15,424][00555] Runner profile tree view:
main_loop: 1208.6949
[2023-02-24 13:11:15,425][00555] Collected {0: 4005888}, FPS: 3314.2
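A quick sanity check of the summary line above (a sketch using only the two numbers the log reports): the FPS figure is the total collected environment frames divided by the runner's `main_loop` wall time.

```python
# Values taken verbatim from the log lines above:
# "main_loop: 1208.6949" and "Collected {0: 4005888}, FPS: 3314.2"
frames_collected = 4005888    # total env frames for policy 0
main_loop_seconds = 1208.6949 # runner main loop wall time

fps = frames_collected / main_loop_seconds
print(round(fps, 1))  # -> 3314.2, matching the logged FPS
```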
|
[2023-02-24 13:12:15,225][00555] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-24 13:12:15,227][00555] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-24 13:12:15,231][00555] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-24 13:12:15,234][00555] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-24 13:12:15,238][00555] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-24 13:12:15,241][00555] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-24 13:12:15,245][00555] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-24 13:12:15,248][00555] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-24 13:12:15,250][00555] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-24 13:12:15,251][00555] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-24 13:12:15,253][00555] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-24 13:12:15,255][00555] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-24 13:12:15,258][00555] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-24 13:12:15,260][00555] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-24 13:12:15,262][00555] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-24 13:12:15,290][00555] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:12:15,293][00555] RunningMeanStd input shape: (3, 72, 128)
[2023-02-24 13:12:15,297][00555] RunningMeanStd input shape: (1,)
[2023-02-24 13:12:15,324][00555] ConvEncoder: input_channels=3
[2023-02-24 13:12:16,193][00555] Conv encoder output size: 512
[2023-02-24 13:12:16,196][00555] Policy head output size: 512
[2023-02-24 13:12:19,493][00555] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-24 13:12:20,848][00555] Num frames 100...
[2023-02-24 13:12:20,974][00555] Num frames 200...
[2023-02-24 13:12:21,096][00555] Num frames 300...
[2023-02-24 13:12:21,216][00555] Num frames 400...
[2023-02-24 13:12:21,338][00555] Num frames 500...
[2023-02-24 13:12:21,458][00555] Num frames 600...
[2023-02-24 13:12:21,586][00555] Num frames 700...
[2023-02-24 13:12:21,715][00555] Num frames 800...
[2023-02-24 13:12:21,847][00555] Num frames 900...
[2023-02-24 13:12:21,977][00555] Num frames 1000...
[2023-02-24 13:12:22,097][00555] Num frames 1100...
[2023-02-24 13:12:22,220][00555] Num frames 1200...
[2023-02-24 13:12:22,330][00555] Avg episode rewards: #0: 28.470, true rewards: #0: 12.470
[2023-02-24 13:12:22,332][00555] Avg episode reward: 28.470, avg true_objective: 12.470
[2023-02-24 13:12:22,407][00555] Num frames 1300...
[2023-02-24 13:12:22,538][00555] Num frames 1400...
[2023-02-24 13:12:22,654][00555] Num frames 1500...
[2023-02-24 13:12:22,788][00555] Num frames 1600...
[2023-02-24 13:12:22,907][00555] Num frames 1700...
[2023-02-24 13:12:23,025][00555] Num frames 1800...
[2023-02-24 13:12:23,150][00555] Num frames 1900...
[2023-02-24 13:12:23,266][00555] Num frames 2000...
[2023-02-24 13:12:23,386][00555] Num frames 2100...
[2023-02-24 13:12:23,493][00555] Avg episode rewards: #0: 24.215, true rewards: #0: 10.715
[2023-02-24 13:12:23,495][00555] Avg episode reward: 24.215, avg true_objective: 10.715
[2023-02-24 13:12:23,568][00555] Num frames 2200...
[2023-02-24 13:12:23,688][00555] Num frames 2300...
[2023-02-24 13:12:23,817][00555] Num frames 2400...
[2023-02-24 13:12:23,947][00555] Num frames 2500...
[2023-02-24 13:12:24,066][00555] Num frames 2600...
[2023-02-24 13:12:24,220][00555] Avg episode rewards: #0: 19.290, true rewards: #0: 8.957
[2023-02-24 13:12:24,221][00555] Avg episode reward: 19.290, avg true_objective: 8.957
[2023-02-24 13:12:24,242][00555] Num frames 2700...
[2023-02-24 13:12:24,365][00555] Num frames 2800...
[2023-02-24 13:12:24,484][00555] Num frames 2900...
[2023-02-24 13:12:24,601][00555] Num frames 3000...
[2023-02-24 13:12:24,725][00555] Num frames 3100...
[2023-02-24 13:12:24,848][00555] Num frames 3200...
[2023-02-24 13:12:24,965][00555] Num frames 3300...
[2023-02-24 13:12:25,135][00555] Avg episode rewards: #0: 17.228, true rewards: #0: 8.477
[2023-02-24 13:12:25,138][00555] Avg episode reward: 17.228, avg true_objective: 8.477
[2023-02-24 13:12:25,151][00555] Num frames 3400...
[2023-02-24 13:12:25,272][00555] Num frames 3500...
[2023-02-24 13:12:25,399][00555] Num frames 3600...
[2023-02-24 13:12:25,527][00555] Num frames 3700...
[2023-02-24 13:12:25,653][00555] Num frames 3800...
[2023-02-24 13:12:25,785][00555] Num frames 3900...
[2023-02-24 13:12:25,940][00555] Avg episode rewards: #0: 16.372, true rewards: #0: 7.972
[2023-02-24 13:12:25,943][00555] Avg episode reward: 16.372, avg true_objective: 7.972
[2023-02-24 13:12:25,965][00555] Num frames 4000...
[2023-02-24 13:12:26,084][00555] Num frames 4100...
[2023-02-24 13:12:26,200][00555] Num frames 4200...
[2023-02-24 13:12:26,328][00555] Num frames 4300...
[2023-02-24 13:12:26,430][00555] Avg episode rewards: #0: 14.397, true rewards: #0: 7.230
[2023-02-24 13:12:26,432][00555] Avg episode reward: 14.397, avg true_objective: 7.230
[2023-02-24 13:12:26,514][00555] Num frames 4400...
[2023-02-24 13:12:26,648][00555] Num frames 4500...
[2023-02-24 13:12:26,784][00555] Num frames 4600...
[2023-02-24 13:12:26,913][00555] Num frames 4700...
[2023-02-24 13:12:27,042][00555] Num frames 4800...
[2023-02-24 13:12:27,208][00555] Avg episode rewards: #0: 13.403, true rewards: #0: 6.974
[2023-02-24 13:12:27,211][00555] Avg episode reward: 13.403, avg true_objective: 6.974
[2023-02-24 13:12:27,238][00555] Num frames 4900...
[2023-02-24 13:12:27,365][00555] Num frames 5000...
[2023-02-24 13:12:27,495][00555] Num frames 5100...
[2023-02-24 13:12:27,613][00555] Num frames 5200...
[2023-02-24 13:12:27,737][00555] Num frames 5300...
[2023-02-24 13:12:27,861][00555] Num frames 5400...
[2023-02-24 13:12:27,978][00555] Num frames 5500...
[2023-02-24 13:12:28,064][00555] Avg episode rewards: #0: 13.158, true rewards: #0: 6.907
[2023-02-24 13:12:28,066][00555] Avg episode reward: 13.158, avg true_objective: 6.907
[2023-02-24 13:12:28,155][00555] Num frames 5600...
[2023-02-24 13:12:28,271][00555] Num frames 5700...
[2023-02-24 13:12:28,392][00555] Num frames 5800...
[2023-02-24 13:12:28,515][00555] Num frames 5900...
[2023-02-24 13:12:28,634][00555] Num frames 6000...
[2023-02-24 13:12:28,760][00555] Num frames 6100...
[2023-02-24 13:12:28,899][00555] Num frames 6200...
[2023-02-24 13:12:29,020][00555] Num frames 6300...
[2023-02-24 13:12:29,140][00555] Num frames 6400...
[2023-02-24 13:12:29,273][00555] Num frames 6500...
[2023-02-24 13:12:29,446][00555] Num frames 6600...
[2023-02-24 13:12:29,617][00555] Num frames 6700...
[2023-02-24 13:12:29,781][00555] Num frames 6800...
[2023-02-24 13:12:29,959][00555] Num frames 6900...
[2023-02-24 13:12:30,138][00555] Avg episode rewards: #0: 15.189, true rewards: #0: 7.744
[2023-02-24 13:12:30,145][00555] Avg episode reward: 15.189, avg true_objective: 7.744
[2023-02-24 13:12:30,198][00555] Num frames 7000...
[2023-02-24 13:12:30,363][00555] Num frames 7100...
[2023-02-24 13:12:30,532][00555] Num frames 7200...
[2023-02-24 13:12:30,710][00555] Num frames 7300...
[2023-02-24 13:12:30,885][00555] Num frames 7400...
[2023-02-24 13:12:31,068][00555] Num frames 7500...
[2023-02-24 13:12:31,258][00555] Avg episode rewards: #0: 14.578, true rewards: #0: 7.578
[2023-02-24 13:12:31,260][00555] Avg episode reward: 14.578, avg true_objective: 7.578
[2023-02-24 13:13:21,076][00555] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-02-24 13:16:35,717][00555] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-24 13:16:35,725][00555] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-24 13:16:35,727][00555] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-24 13:16:35,729][00555] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-24 13:16:35,731][00555] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-24 13:16:35,734][00555] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-24 13:16:35,738][00555] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-02-24 13:16:35,740][00555] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-24 13:16:35,741][00555] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-02-24 13:16:35,743][00555] Adding new argument 'hf_repository'='zbenmo/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-02-24 13:16:35,745][00555] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-24 13:16:35,746][00555] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-24 13:16:35,748][00555] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-24 13:16:35,749][00555] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-24 13:16:35,750][00555] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-24 13:16:35,784][00555] RunningMeanStd input shape: (3, 72, 128)
[2023-02-24 13:16:35,787][00555] RunningMeanStd input shape: (1,)
[2023-02-24 13:16:35,807][00555] ConvEncoder: input_channels=3
[2023-02-24 13:16:35,867][00555] Conv encoder output size: 512
[2023-02-24 13:16:35,869][00555] Policy head output size: 512
[2023-02-24 13:16:35,899][00555] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-24 13:16:36,547][00555] Num frames 100...
[2023-02-24 13:16:36,705][00555] Num frames 200...
[2023-02-24 13:16:36,867][00555] Num frames 300...
[2023-02-24 13:16:37,042][00555] Num frames 400...
[2023-02-24 13:16:37,121][00555] Avg episode rewards: #0: 6.100, true rewards: #0: 4.100
[2023-02-24 13:16:37,123][00555] Avg episode reward: 6.100, avg true_objective: 4.100
[2023-02-24 13:16:37,291][00555] Num frames 500...
[2023-02-24 13:16:37,461][00555] Num frames 600...
[2023-02-24 13:16:37,636][00555] Num frames 700...
[2023-02-24 13:16:37,812][00555] Num frames 800...
[2023-02-24 13:16:37,975][00555] Num frames 900...
[2023-02-24 13:16:38,138][00555] Num frames 1000...
[2023-02-24 13:16:38,306][00555] Num frames 1100...
[2023-02-24 13:16:38,394][00555] Avg episode rewards: #0: 10.585, true rewards: #0: 5.585
[2023-02-24 13:16:38,397][00555] Avg episode reward: 10.585, avg true_objective: 5.585
[2023-02-24 13:16:38,544][00555] Num frames 1200...
[2023-02-24 13:16:38,700][00555] Num frames 1300...
[2023-02-24 13:16:38,819][00555] Num frames 1400...
[2023-02-24 13:16:38,938][00555] Num frames 1500...
[2023-02-24 13:16:39,056][00555] Num frames 1600...
[2023-02-24 13:16:39,176][00555] Num frames 1700...
[2023-02-24 13:16:39,300][00555] Num frames 1800...
[2023-02-24 13:16:39,421][00555] Num frames 1900...
[2023-02-24 13:16:39,558][00555] Num frames 2000...
[2023-02-24 13:16:39,682][00555] Num frames 2100...
[2023-02-24 13:16:39,752][00555] Avg episode rewards: #0: 13.363, true rewards: #0: 7.030
[2023-02-24 13:16:39,753][00555] Avg episode reward: 13.363, avg true_objective: 7.030
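The "Avg episode rewards" lines are running means over the episodes completed so far, so individual episode returns can be recovered from consecutive averages via r_n = n*avg_n - (n-1)*avg_{n-1}. A minimal sketch, using the three average true rewards logged above for this evaluation run (4.100, 5.585, 7.030):

```python
# Recover per-episode true rewards from the running means in the log:
# r_n = n * avg_n - (n - 1) * avg_{n-1}
running_avgs = [4.100, 5.585, 7.030]  # avg true rewards after episodes 1-3

prev_avg = 0.0
episode_rewards = []
for n, avg in enumerate(running_avgs, start=1):
    episode_rewards.append(n * avg - (n - 1) * prev_avg)
    prev_avg = avg

print([round(r, 2) for r in episode_rewards])  # -> [4.1, 7.07, 9.92]
```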
|
[2023-02-24 13:16:39,868][00555] Num frames 2200...
[2023-02-24 13:16:39,986][00555] Num frames 2300...
[2023-02-24 13:16:40,100][00555] Num frames 2400...
[2023-02-24 13:16:40,227][00555] Num frames 2500...
[2023-02-24 13:16:40,340][00555] Num frames 2600...
[2023-02-24 13:16:40,462][00555] Num frames 2700...
[2023-02-24 13:16:40,586][00555] Num frames 2800...
[2023-02-24 13:16:40,711][00555] Num frames 2900...
[2023-02-24 13:16:40,843][00555] Num frames 3000...
[2023-02-24 13:16:40,973][00555] Num frames 3100...
[2023-02-24 13:16:41,095][00555] Num frames 3200...
[2023-02-24 13:16:41,213][00555] Num frames 3300...
[2023-02-24 13:16:41,339][00555] Num frames 3400...
[2023-02-24 13:16:41,456][00555] Num frames 3500...
[2023-02-24 13:16:41,578][00555] Num frames 3600...
[2023-02-24 13:16:41,706][00555] Num frames 3700...
[2023-02-24 13:16:41,829][00555] Num frames 3800...
[2023-02-24 13:16:41,948][00555] Num frames 3900...
[2023-02-24 13:16:42,007][00555] Avg episode rewards: #0: 20.502, true rewards: #0: 9.752
[2023-02-24 13:16:42,009][00555] Avg episode reward: 20.502, avg true_objective: 9.752
[2023-02-24 13:16:42,128][00555] Num frames 4000...
[2023-02-24 13:16:42,242][00555] Avg episode rewards: #0: 16.704, true rewards: #0: 8.104
[2023-02-24 13:16:42,247][00555] Avg episode reward: 16.704, avg true_objective: 8.104
[2023-02-24 13:16:42,311][00555] Num frames 4100...
[2023-02-24 13:16:42,433][00555] Num frames 4200...
[2023-02-24 13:16:42,555][00555] Num frames 4300...
[2023-02-24 13:16:42,682][00555] Num frames 4400...
[2023-02-24 13:16:42,801][00555] Num frames 4500...
[2023-02-24 13:16:42,952][00555] Avg episode rewards: #0: 15.808, true rewards: #0: 7.642
[2023-02-24 13:16:42,953][00555] Avg episode reward: 15.808, avg true_objective: 7.642
[2023-02-24 13:16:42,975][00555] Num frames 4600...
[2023-02-24 13:16:43,091][00555] Num frames 4700...
[2023-02-24 13:16:43,208][00555] Num frames 4800...
[2023-02-24 13:16:43,332][00555] Num frames 4900...
[2023-02-24 13:16:43,452][00555] Num frames 5000...
[2023-02-24 13:16:43,571][00555] Num frames 5100...
[2023-02-24 13:16:43,692][00555] Num frames 5200...
[2023-02-24 13:16:43,808][00555] Num frames 5300...
[2023-02-24 13:16:43,925][00555] Num frames 5400...
[2023-02-24 13:16:44,047][00555] Num frames 5500...
[2023-02-24 13:16:44,171][00555] Num frames 5600...
[2023-02-24 13:16:44,295][00555] Num frames 5700...
[2023-02-24 13:16:44,436][00555] Avg episode rewards: #0: 17.099, true rewards: #0: 8.241
[2023-02-24 13:16:44,438][00555] Avg episode reward: 17.099, avg true_objective: 8.241
[2023-02-24 13:16:44,479][00555] Num frames 5800...
[2023-02-24 13:16:44,598][00555] Num frames 5900...
[2023-02-24 13:16:44,712][00555] Num frames 6000...
[2023-02-24 13:16:44,830][00555] Num frames 6100...
[2023-02-24 13:16:44,955][00555] Num frames 6200...
[2023-02-24 13:16:45,033][00555] Avg episode rewards: #0: 15.646, true rewards: #0: 7.771
[2023-02-24 13:16:45,036][00555] Avg episode reward: 15.646, avg true_objective: 7.771
[2023-02-24 13:16:45,137][00555] Num frames 6300...
[2023-02-24 13:16:45,258][00555] Num frames 6400...
[2023-02-24 13:16:45,382][00555] Num frames 6500...
[2023-02-24 13:16:45,526][00555] Avg episode rewards: #0: 14.523, true rewards: #0: 7.301
[2023-02-24 13:16:45,529][00555] Avg episode reward: 14.523, avg true_objective: 7.301
[2023-02-24 13:16:45,569][00555] Num frames 6600...
[2023-02-24 13:16:45,686][00555] Num frames 6700...
[2023-02-24 13:16:45,806][00555] Num frames 6800...
[2023-02-24 13:16:45,921][00555] Num frames 6900...
[2023-02-24 13:16:46,035][00555] Num frames 7000...
[2023-02-24 13:16:46,155][00555] Num frames 7100...
[2023-02-24 13:16:46,301][00555] Avg episode rewards: #0: 14.079, true rewards: #0: 7.179
[2023-02-24 13:16:46,304][00555] Avg episode reward: 14.079, avg true_objective: 7.179
[2023-02-24 13:17:33,289][00555] Replay video saved to /content/train_dir/default_experiment/replay.mp4!