[2025-02-13 15:37:02,172][00196] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-02-13 15:37:02,177][00196] Rollout worker 0 uses device cpu
[2025-02-13 15:37:02,179][00196] Rollout worker 1 uses device cpu
[2025-02-13 15:37:02,182][00196] Rollout worker 2 uses device cpu
[2025-02-13 15:37:02,184][00196] Rollout worker 3 uses device cpu
[2025-02-13 15:37:02,186][00196] Rollout worker 4 uses device cpu
[2025-02-13 15:37:02,187][00196] Rollout worker 5 uses device cpu
[2025-02-13 15:37:02,189][00196] Rollout worker 6 uses device cpu
[2025-02-13 15:37:02,191][00196] Rollout worker 7 uses device cpu
[2025-02-13 15:37:15,059][00196] Environment doom_basic already registered, overwriting...
[2025-02-13 15:37:15,064][00196] Environment doom_two_colors_easy already registered, overwriting...
[2025-02-13 15:37:15,069][00196] Environment doom_two_colors_hard already registered, overwriting...
[2025-02-13 15:37:15,071][00196] Environment doom_dm already registered, overwriting...
[2025-02-13 15:37:15,080][00196] Environment doom_dwango5 already registered, overwriting...
[2025-02-13 15:37:15,082][00196] Environment doom_my_way_home_flat_actions already registered, overwriting...
[2025-02-13 15:37:15,083][00196] Environment doom_defend_the_center_flat_actions already registered, overwriting...
[2025-02-13 15:37:15,087][00196] Environment doom_my_way_home already registered, overwriting...
[2025-02-13 15:37:15,095][00196] Environment doom_deadly_corridor already registered, overwriting...
[2025-02-13 15:37:15,099][00196] Environment doom_defend_the_center already registered, overwriting...
[2025-02-13 15:37:15,100][00196] Environment doom_defend_the_line already registered, overwriting...
[2025-02-13 15:37:15,103][00196] Environment doom_health_gathering already registered, overwriting...
[2025-02-13 15:37:15,114][00196] Environment doom_health_gathering_supreme already registered, overwriting...
[2025-02-13 15:37:15,130][00196] Environment doom_battle already registered, overwriting...
[2025-02-13 15:37:15,136][00196] Environment doom_battle2 already registered, overwriting...
[2025-02-13 15:37:15,137][00196] Environment doom_duel_bots already registered, overwriting...
[2025-02-13 15:37:15,139][00196] Environment doom_deathmatch_bots already registered, overwriting...
[2025-02-13 15:37:15,142][00196] Environment doom_duel already registered, overwriting...
[2025-02-13 15:37:15,148][00196] Environment doom_deathmatch_full already registered, overwriting...
[2025-02-13 15:37:15,155][00196] Environment doom_benchmark already registered, overwriting...
[2025-02-13 15:37:15,159][00196] register_encoder_factory: 
[2025-02-13 15:37:15,211][00196] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-02-13 15:37:15,293][00196] Experiment dir /content/train_dir/default_experiment already exists!
[2025-02-13 15:37:15,299][00196] Resuming existing experiment from /content/train_dir/default_experiment...
[2025-02-13 15:37:15,341][00196] Weights and Biases integration disabled
[2025-02-13 15:37:15,372][00196] Environment var CUDA_VISIBLE_DEVICES is 
[2025-02-13 15:37:21,048][00196] Starting experiment with the following configuration:
help=False
algo=APPO
env=doom_health_gathering_supreme
experiment=default_experiment
train_dir=/content/train_dir
restart_behavior=resume
device=gpu
seed=None
num_policies=1
async_rl=True
serial_mode=False
batched_sampling=False
num_batches_to_accumulate=2
worker_num_splits=2
policy_workers_per_policy=1
max_policy_lag=1000
num_workers=8
num_envs_per_worker=4
batch_size=1024
num_batches_per_epoch=1
num_epochs=1
rollout=32
recurrence=32
shuffle_minibatches=False
gamma=0.99
reward_scale=1.0
reward_clip=1000.0
value_bootstrap=False
normalize_returns=True
exploration_loss_coeff=0.001
value_loss_coeff=0.5
kl_loss_coeff=0.0
exploration_loss=symmetric_kl
gae_lambda=0.95
ppo_clip_ratio=0.1
ppo_clip_value=0.2
with_vtrace=False
vtrace_rho=1.0
vtrace_c=1.0
optimizer=adam
adam_eps=1e-06
adam_beta1=0.9
adam_beta2=0.999
max_grad_norm=4.0
learning_rate=0.0001
lr_schedule=constant
lr_schedule_kl_threshold=0.008
lr_adaptive_min=1e-06
lr_adaptive_max=0.01
obs_subtract_mean=0.0
obs_scale=255.0
normalize_input=True
normalize_input_keys=None
decorrelate_experience_max_seconds=0
decorrelate_envs_on_one_worker=True
actor_worker_gpus=[]
set_workers_cpu_affinity=True
force_envs_single_thread=False
default_niceness=0
log_to_file=True
experiment_summaries_interval=10
flush_summaries_interval=30
stats_avg=100
summaries_use_frameskip=True
heartbeat_interval=20
heartbeat_reporting_interval=600
train_for_env_steps=6000000
train_for_seconds=10000000000
save_every_sec=120
keep_checkpoints=2
load_checkpoint_kind=latest
save_milestones_sec=-1
save_best_every_sec=5
save_best_metric=reward
save_best_after=100000
benchmark=False
encoder_mlp_layers=[512, 512]
encoder_conv_architecture=convnet_simple
encoder_conv_mlp_layers=[512]
use_rnn=True
rnn_size=512
rnn_type=gru
rnn_num_layers=1
decoder_mlp_layers=[]
nonlinearity=elu
policy_initialization=orthogonal
policy_init_gain=1.0
actor_critic_share_weights=True
adaptive_stddev=True
continuous_tanh_scale=0.0
initial_stddev=1.0
use_env_info_cache=False
env_gpu_actions=False
env_gpu_observations=True
env_frameskip=4
env_framestack=1
pixel_format=CHW
use_record_episode_statistics=False
with_wandb=False
wandb_user=None
wandb_project=sample_factory
wandb_group=None
wandb_job_type=SF
wandb_tags=[]
with_pbt=False
pbt_mix_policies_in_one_env=True
pbt_period_env_steps=5000000
pbt_start_mutation=20000000
pbt_replace_fraction=0.3
pbt_mutation_rate=0.15
pbt_replace_reward_gap=0.1
pbt_replace_reward_gap_absolute=1e-06
pbt_optimize_gamma=False
pbt_target_objective=true_objective
pbt_perturb_min=1.1
pbt_perturb_max=1.5
num_agents=-1
num_humans=0
num_bots=-1
start_bot_difficulty=None
timelimit=None
res_w=128
res_h=72
wide_aspect_ratio=False
eval_env_frameskip=1
fps=35
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=6000000
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 6000000}
git_hash=unknown
git_repo_name=not a git repository
[2025-02-13 15:37:21,052][00196] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-02-13 15:37:21,057][00196] Rollout worker 0 uses device cpu
[2025-02-13 15:37:21,059][00196] Rollout worker 1 uses device cpu
[2025-02-13 15:37:21,061][00196] Rollout worker 2 uses device cpu
[2025-02-13 15:37:21,062][00196] Rollout worker 3 uses device cpu
[2025-02-13 15:37:21,064][00196] Rollout worker 4 uses device cpu
[2025-02-13 15:37:21,065][00196] Rollout worker 5 uses device cpu
[2025-02-13 15:37:21,066][00196] Rollout worker 6 uses device cpu
[2025-02-13 15:37:21,068][00196] Rollout worker 7 uses device cpu
[2025-02-13 15:48:42,168][00196] Environment doom_basic already registered, overwriting...
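The command_line field in the config dump above is simply the cli_args dictionary rendered as --key=value flags. A minimal sketch of that correspondence (plain Python for illustration, not Sample Factory code):

```python
# cli_args exactly as recorded in the config dump above
cli_args = {
    "env": "doom_health_gathering_supreme",
    "num_workers": 8,
    "num_envs_per_worker": 4,
    "train_for_env_steps": 6000000,
}

# Render each key/value pair as a --key=value flag; dicts preserve
# insertion order in Python 3.7+, so this reproduces the logged string.
command_line = " ".join(f"--{key}={value}" for key, value in cli_args.items())
print(command_line)
```

This matches the command_line field printed in the same dump; the remaining config keys are defaults filled in (and persisted to config.json) by the experiment runner.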
[2025-02-13 15:48:42,173][00196] Environment doom_two_colors_easy already registered, overwriting...
[2025-02-13 15:48:42,176][00196] Environment doom_two_colors_hard already registered, overwriting...
[2025-02-13 15:48:42,178][00196] Environment doom_dm already registered, overwriting...
[2025-02-13 15:48:42,180][00196] Environment doom_dwango5 already registered, overwriting...
[2025-02-13 15:48:42,182][00196] Environment doom_my_way_home_flat_actions already registered, overwriting...
[2025-02-13 15:48:42,183][00196] Environment doom_defend_the_center_flat_actions already registered, overwriting...
[2025-02-13 15:48:42,184][00196] Environment doom_my_way_home already registered, overwriting...
[2025-02-13 15:48:42,186][00196] Environment doom_deadly_corridor already registered, overwriting...
[2025-02-13 15:48:42,187][00196] Environment doom_defend_the_center already registered, overwriting...
[2025-02-13 15:48:42,190][00196] Environment doom_defend_the_line already registered, overwriting...
[2025-02-13 15:48:42,193][00196] Environment doom_health_gathering already registered, overwriting...
[2025-02-13 15:48:42,195][00196] Environment doom_health_gathering_supreme already registered, overwriting...
[2025-02-13 15:48:42,199][00196] Environment doom_battle already registered, overwriting...
[2025-02-13 15:48:42,204][00196] Environment doom_battle2 already registered, overwriting...
[2025-02-13 15:48:42,206][00196] Environment doom_duel_bots already registered, overwriting...
[2025-02-13 15:48:42,210][00196] Environment doom_deathmatch_bots already registered, overwriting...
[2025-02-13 15:48:42,213][00196] Environment doom_duel already registered, overwriting...
[2025-02-13 15:48:42,216][00196] Environment doom_deathmatch_full already registered, overwriting...
[2025-02-13 15:48:42,218][00196] Environment doom_benchmark already registered, overwriting...
[2025-02-13 15:48:42,221][00196] register_encoder_factory: 
[2025-02-13 15:49:50,469][00196] Environment doom_basic already registered, overwriting...
[2025-02-13 15:49:50,472][00196] Environment doom_two_colors_easy already registered, overwriting...
[2025-02-13 15:49:50,474][00196] Environment doom_two_colors_hard already registered, overwriting...
[2025-02-13 15:49:50,476][00196] Environment doom_dm already registered, overwriting...
[2025-02-13 15:49:50,479][00196] Environment doom_dwango5 already registered, overwriting...
[2025-02-13 15:49:50,484][00196] Environment doom_my_way_home_flat_actions already registered, overwriting...
[2025-02-13 15:49:50,487][00196] Environment doom_defend_the_center_flat_actions already registered, overwriting...
[2025-02-13 15:49:50,489][00196] Environment doom_my_way_home already registered, overwriting...
[2025-02-13 15:49:50,492][00196] Environment doom_deadly_corridor already registered, overwriting...
[2025-02-13 15:49:50,496][00196] Environment doom_defend_the_center already registered, overwriting...
[2025-02-13 15:49:50,498][00196] Environment doom_defend_the_line already registered, overwriting...
[2025-02-13 15:49:50,500][00196] Environment doom_health_gathering already registered, overwriting...
[2025-02-13 15:49:50,503][00196] Environment doom_health_gathering_supreme already registered, overwriting...
[2025-02-13 15:49:50,506][00196] Environment doom_battle already registered, overwriting...
[2025-02-13 15:49:50,508][00196] Environment doom_battle2 already registered, overwriting...
[2025-02-13 15:49:50,510][00196] Environment doom_duel_bots already registered, overwriting...
[2025-02-13 15:49:50,513][00196] Environment doom_deathmatch_bots already registered, overwriting...
[2025-02-13 15:49:50,517][00196] Environment doom_duel already registered, overwriting...
[2025-02-13 15:49:50,520][00196] Environment doom_deathmatch_full already registered, overwriting...
[2025-02-13 15:49:50,523][00196] Environment doom_benchmark already registered, overwriting...
[2025-02-13 15:49:50,526][00196] register_encoder_factory: 
[2025-02-13 15:49:50,549][00196] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-02-13 15:49:50,550][00196] Overriding arg 'device' with value 'cpu' passed from command line
[2025-02-13 15:49:50,559][00196] Experiment dir /content/train_dir/default_experiment already exists!
[2025-02-13 15:49:50,560][00196] Resuming existing experiment from /content/train_dir/default_experiment...
[2025-02-13 15:49:50,564][00196] Weights and Biases integration disabled
[2025-02-13 15:49:50,569][00196] Environment var CUDA_VISIBLE_DEVICES is 
[2025-02-13 15:49:54,580][00196] Starting experiment with the following configuration:
help=False
algo=APPO
env=doom_health_gathering_supreme
experiment=default_experiment
train_dir=/content/train_dir
restart_behavior=resume
device=cpu
seed=None
num_policies=1
async_rl=True
serial_mode=False
batched_sampling=False
num_batches_to_accumulate=2
worker_num_splits=2
policy_workers_per_policy=1
max_policy_lag=1000
num_workers=8
num_envs_per_worker=4
batch_size=1024
num_batches_per_epoch=1
num_epochs=1
rollout=32
recurrence=32
shuffle_minibatches=False
gamma=0.99
reward_scale=1.0
reward_clip=1000.0
value_bootstrap=False
normalize_returns=True
exploration_loss_coeff=0.001
value_loss_coeff=0.5
kl_loss_coeff=0.0
exploration_loss=symmetric_kl
gae_lambda=0.95
ppo_clip_ratio=0.1
ppo_clip_value=0.2
with_vtrace=False
vtrace_rho=1.0
vtrace_c=1.0
optimizer=adam
adam_eps=1e-06
adam_beta1=0.9
adam_beta2=0.999
max_grad_norm=4.0
learning_rate=0.0001
lr_schedule=constant
lr_schedule_kl_threshold=0.008
lr_adaptive_min=1e-06
lr_adaptive_max=0.01
obs_subtract_mean=0.0
obs_scale=255.0
normalize_input=True
normalize_input_keys=None
decorrelate_experience_max_seconds=0
decorrelate_envs_on_one_worker=True
actor_worker_gpus=[]
set_workers_cpu_affinity=True
force_envs_single_thread=False
default_niceness=0
log_to_file=True
experiment_summaries_interval=10
flush_summaries_interval=30
stats_avg=100
summaries_use_frameskip=True
heartbeat_interval=20
heartbeat_reporting_interval=600
train_for_env_steps=6000000
train_for_seconds=10000000000
save_every_sec=120
keep_checkpoints=2
load_checkpoint_kind=latest
save_milestones_sec=-1
save_best_every_sec=5
save_best_metric=reward
save_best_after=100000
benchmark=False
encoder_mlp_layers=[512, 512]
encoder_conv_architecture=convnet_simple
encoder_conv_mlp_layers=[512]
use_rnn=True
rnn_size=512
rnn_type=gru
rnn_num_layers=1
decoder_mlp_layers=[]
nonlinearity=elu
policy_initialization=orthogonal
policy_init_gain=1.0
actor_critic_share_weights=True
adaptive_stddev=True
continuous_tanh_scale=0.0
initial_stddev=1.0
use_env_info_cache=False
env_gpu_actions=False
env_gpu_observations=True
env_frameskip=4
env_framestack=1
pixel_format=CHW
use_record_episode_statistics=False
with_wandb=False
wandb_user=None
wandb_project=sample_factory
wandb_group=None
wandb_job_type=SF
wandb_tags=[]
with_pbt=False
pbt_mix_policies_in_one_env=True
pbt_period_env_steps=5000000
pbt_start_mutation=20000000
pbt_replace_fraction=0.3
pbt_mutation_rate=0.15
pbt_replace_reward_gap=0.1
pbt_replace_reward_gap_absolute=1e-06
pbt_optimize_gamma=False
pbt_target_objective=true_objective
pbt_perturb_min=1.1
pbt_perturb_max=1.5
num_agents=-1
num_humans=0
num_bots=-1
start_bot_difficulty=None
timelimit=None
res_w=128
res_h=72
wide_aspect_ratio=False
eval_env_frameskip=1
fps=35
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=6000000
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 6000000}
git_hash=unknown
git_repo_name=not a git repository
[2025-02-13 15:49:54,583][00196] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-02-13 15:49:54,588][00196] Rollout worker 0 uses device cpu
[2025-02-13 15:49:54,590][00196] Rollout worker 1 uses device cpu
[2025-02-13 15:49:54,593][00196] Rollout worker 2 uses device cpu
[2025-02-13 15:49:54,595][00196] Rollout worker 3 uses device cpu
[2025-02-13 15:49:54,596][00196] Rollout worker 4 uses device cpu
[2025-02-13 15:49:54,597][00196] Rollout worker 5 uses device cpu
[2025-02-13 15:49:54,605][00196] Rollout worker 6 uses device cpu
[2025-02-13 15:49:54,606][00196] Rollout worker 7 uses device cpu
[2025-02-13 15:49:54,738][00196] InferenceWorker_p0-w0: min num requests: 2
[2025-02-13 15:49:54,780][00196] Starting all processes...
[2025-02-13 15:49:54,781][00196] Starting process learner_proc0
[2025-02-13 15:49:54,861][00196] Starting all processes...
[2025-02-13 15:49:54,876][00196] Starting process inference_proc0-0
[2025-02-13 15:49:54,877][00196] Starting process rollout_proc0
[2025-02-13 15:49:54,879][00196] Starting process rollout_proc1
[2025-02-13 15:49:54,880][00196] Starting process rollout_proc2
[2025-02-13 15:49:54,881][00196] Starting process rollout_proc3
[2025-02-13 15:49:54,881][00196] Starting process rollout_proc4
[2025-02-13 15:49:54,881][00196] Starting process rollout_proc5
[2025-02-13 15:49:54,881][00196] Starting process rollout_proc6
[2025-02-13 15:49:54,881][00196] Starting process rollout_proc7
[2025-02-13 15:50:22,039][10146] Worker 6 uses CPU cores [0]
[2025-02-13 15:50:22,141][10140] Worker 0 uses CPU cores [0]
[2025-02-13 15:50:22,173][00196] Heartbeat connected on RolloutWorker_w6
[2025-02-13 15:50:22,260][10122] Starting seed is not provided
[2025-02-13 15:50:22,261][10122] Initializing actor-critic model on device cpu
[2025-02-13 15:50:22,262][10122] RunningMeanStd input shape: (3, 72, 128)
[2025-02-13 15:50:22,265][00196] Heartbeat connected on Batcher_0
[2025-02-13 15:50:22,273][10122] RunningMeanStd input shape: (1,)
[2025-02-13 15:50:22,291][00196] Heartbeat connected on RolloutWorker_w0
[2025-02-13 15:50:22,342][10142] Worker 2 uses CPU cores [0]
[2025-02-13 15:50:22,378][10122] ConvEncoder: input_channels=3
[2025-02-13 15:50:22,393][10143] Worker 4 uses CPU cores [0]
[2025-02-13 15:50:22,408][00196] Heartbeat connected on InferenceWorker_p0-w0
[2025-02-13 15:50:22,423][10145] Worker 5 uses CPU cores [1]
[2025-02-13 15:50:22,461][00196] Heartbeat connected on RolloutWorker_w2
[2025-02-13 15:50:22,471][10141] Worker 1 uses CPU cores [1]
[2025-02-13 15:50:22,481][00196] Heartbeat connected on RolloutWorker_w4
[2025-02-13 15:50:22,514][00196] Heartbeat connected on RolloutWorker_w5
[2025-02-13 15:50:22,533][00196] Heartbeat connected on RolloutWorker_w1
[2025-02-13 15:50:22,577][10144] Worker 3 uses CPU cores [1]
[2025-02-13 15:50:22,594][00196] Heartbeat connected on RolloutWorker_w3
[2025-02-13 15:50:22,639][10147] Worker 7 uses CPU cores [1]
[2025-02-13 15:50:22,652][00196] Heartbeat connected on RolloutWorker_w7
[2025-02-13 15:50:22,823][10122] Conv encoder output size: 512
[2025-02-13 15:50:22,823][10122] Policy head output size: 512
[2025-02-13 15:50:22,911][10122] Created Actor Critic model with architecture:
[2025-02-13 15:50:22,912][10122] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2025-02-13 15:50:23,555][10122] Using optimizer 
[2025-02-13 15:50:31,071][10122] No checkpoints found
[2025-02-13 15:50:31,072][10122] Did not load from checkpoint, starting from scratch!
[2025-02-13 15:50:31,072][10122] Initialized policy 0 weights for model version 0
[2025-02-13 15:50:31,078][10139] RunningMeanStd input shape: (3, 72, 128)
[2025-02-13 15:50:31,082][10139] RunningMeanStd input shape: (1,)
[2025-02-13 15:50:31,083][10122] LearnerWorker_p0 finished initialization!
[2025-02-13 15:50:31,084][00196] Heartbeat connected on LearnerWorker_p0
[2025-02-13 15:50:31,112][10139] ConvEncoder: input_channels=3
[2025-02-13 15:50:31,335][10139] Conv encoder output size: 512
[2025-02-13 15:50:31,335][10139] Policy head output size: 512
[2025-02-13 15:50:31,370][00196] Inference worker 0-0 is ready!
[2025-02-13 15:50:31,372][00196] All inference workers are ready! Signal rollout workers to start!
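The learner and inference workers both log RunningMeanStd normalizers for the (3, 72, 128) observations and the (1,) returns, matching normalize_input=True and normalize_returns=True in the config. A minimal sketch of the idea behind such a streaming normalizer, using the standard parallel mean/variance merge (Chan et al.); this is an illustration only, not Sample Factory's RunningMeanStdInPlace implementation:

```python
class RunningMeanStd:
    """Track a streaming mean/variance and normalize samples with it."""

    def __init__(self, eps: float = 1e-4):
        self.mean = 0.0
        self.var = 1.0
        self.count = eps  # tiny prior count keeps the first updates stable

    def update(self, batch):
        n = len(batch)
        batch_mean = sum(batch) / n
        batch_var = sum((x - batch_mean) ** 2 for x in batch) / n
        delta = batch_mean - self.mean
        total = self.count + n
        # Merge the batch moments into the running moments without
        # storing any past samples (parallel-variance formula).
        self.mean += delta * n / total
        m_a = self.var * self.count
        m_b = batch_var * n
        self.var = (m_a + m_b + delta ** 2 * self.count * n / total) / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / (self.var ** 0.5 + 1e-8)
```

In the actual model the same idea is applied per-element to the observation tensor and to the scalar returns, which is why the log reports the two input shapes separately.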
[2025-02-13 15:50:31,735][10142] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 15:50:31,738][10146] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 15:50:31,741][10140] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 15:50:31,737][10141] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 15:50:31,744][10143] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 15:50:31,754][10145] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 15:50:31,757][10147] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 15:50:31,749][10144] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 15:50:33,586][10144] Decorrelating experience for 0 frames...
[2025-02-13 15:50:33,589][10141] Decorrelating experience for 0 frames...
[2025-02-13 15:50:33,957][10143] Decorrelating experience for 0 frames...
[2025-02-13 15:50:33,970][10142] Decorrelating experience for 0 frames...
[2025-02-13 15:50:33,981][10140] Decorrelating experience for 0 frames...
[2025-02-13 15:50:33,995][10146] Decorrelating experience for 0 frames...
[2025-02-13 15:50:34,943][10141] Decorrelating experience for 32 frames...
[2025-02-13 15:50:34,958][10144] Decorrelating experience for 32 frames...
[2025-02-13 15:50:35,565][10147] Decorrelating experience for 0 frames...
[2025-02-13 15:50:35,569][00196] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-13 15:50:35,603][10143] Decorrelating experience for 32 frames...
[2025-02-13 15:50:35,630][10142] Decorrelating experience for 32 frames...
[2025-02-13 15:50:35,687][10146] Decorrelating experience for 32 frames...
[2025-02-13 15:50:36,534][10141] Decorrelating experience for 64 frames...
[2025-02-13 15:50:36,666][10140] Decorrelating experience for 32 frames...
[2025-02-13 15:50:36,772][10145] Decorrelating experience for 0 frames...
[2025-02-13 15:50:38,086][10146] Decorrelating experience for 64 frames...
[2025-02-13 15:50:38,502][10147] Decorrelating experience for 32 frames...
[2025-02-13 15:50:38,802][10144] Decorrelating experience for 64 frames...
[2025-02-13 15:50:39,021][10145] Decorrelating experience for 32 frames...
[2025-02-13 15:50:39,899][10140] Decorrelating experience for 64 frames...
[2025-02-13 15:50:40,570][00196] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-13 15:50:40,866][10143] Decorrelating experience for 64 frames...
[2025-02-13 15:50:41,012][10146] Decorrelating experience for 96 frames...
[2025-02-13 15:50:41,497][10144] Decorrelating experience for 96 frames...
[2025-02-13 15:50:41,852][10147] Decorrelating experience for 64 frames...
[2025-02-13 15:50:42,259][10145] Decorrelating experience for 64 frames...
[2025-02-13 15:50:44,295][10142] Decorrelating experience for 64 frames...
[2025-02-13 15:50:44,641][10143] Decorrelating experience for 96 frames...
[2025-02-13 15:50:44,900][10140] Decorrelating experience for 96 frames...
[2025-02-13 15:50:45,569][00196] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 17.8. Samples: 178. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-13 15:50:45,577][00196] Avg episode reward: [(0, '2.011')]
[2025-02-13 15:50:46,456][10141] Decorrelating experience for 96 frames...
[2025-02-13 15:50:46,754][10142] Decorrelating experience for 96 frames...
[2025-02-13 15:50:46,816][10147] Decorrelating experience for 96 frames...
[2025-02-13 15:50:48,334][10145] Decorrelating experience for 96 frames...
[2025-02-13 15:50:50,569][00196] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 113.1. Samples: 1696. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-13 15:50:50,575][00196] Avg episode reward: [(0, '2.101')]
[2025-02-13 15:50:52,067][10122] Signal inference workers to stop experience collection...
[2025-02-13 15:50:52,131][10139] InferenceWorker_p0-w0: stopping experience collection
[2025-02-13 15:50:53,819][10122] Signal inference workers to resume experience collection...
[2025-02-13 15:50:53,825][10139] InferenceWorker_p0-w0: resuming experience collection
[2025-02-13 15:50:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 204.8, 300 sec: 204.8). Total num frames: 4096. Throughput: 0: 142.8. Samples: 2856. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-02-13 15:50:55,576][00196] Avg episode reward: [(0, '2.264')]
[2025-02-13 15:51:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 327.7, 300 sec: 327.7). Total num frames: 8192. Throughput: 0: 136.6. Samples: 3414. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-02-13 15:51:00,572][00196] Avg episode reward: [(0, '2.755')]
[2025-02-13 15:51:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 409.6, 300 sec: 409.6). Total num frames: 12288. Throughput: 0: 154.1. Samples: 4624. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:05,576][00196] Avg episode reward: [(0, '3.198')]
[2025-02-13 15:51:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 468.1, 300 sec: 468.1). Total num frames: 16384. Throughput: 0: 162.9. Samples: 5702. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:10,580][00196] Avg episode reward: [(0, '3.369')]
[2025-02-13 15:51:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 409.6, 300 sec: 409.6). Total num frames: 16384. Throughput: 0: 156.4. Samples: 6256. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:15,572][00196] Avg episode reward: [(0, '3.500')]
[2025-02-13 15:51:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 455.1, 300 sec: 455.1). Total num frames: 20480. Throughput: 0: 169.2. Samples: 7612. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:20,577][00196] Avg episode reward: [(0, '3.671')]
[2025-02-13 15:51:25,569][00196] Fps is (10 sec: 1228.8, 60 sec: 573.4, 300 sec: 573.4). Total num frames: 28672. Throughput: 0: 193.4. Samples: 8702. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:25,575][00196] Avg episode reward: [(0, '3.743')]
[2025-02-13 15:51:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 521.3, 300 sec: 521.3). Total num frames: 28672. Throughput: 0: 206.2. Samples: 9458. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:30,573][00196] Avg episode reward: [(0, '3.876')]
[2025-02-13 15:51:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 546.1, 300 sec: 546.1). Total num frames: 32768. Throughput: 0: 198.7. Samples: 10638. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:35,575][00196] Avg episode reward: [(0, '4.054')]
[2025-02-13 15:51:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 614.4, 300 sec: 567.1). Total num frames: 36864. Throughput: 0: 198.8. Samples: 11804. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:40,575][00196] Avg episode reward: [(0, '4.204')]
[2025-02-13 15:51:40,926][10139] Updated weights for policy 0, policy_version 10 (0.1611)
[2025-02-13 15:51:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 585.1). Total num frames: 40960. Throughput: 0: 199.4. Samples: 12388. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:45,576][00196] Avg episode reward: [(0, '4.496')]
[2025-02-13 15:51:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 600.7). Total num frames: 45056. Throughput: 0: 196.6. Samples: 13472. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:50,578][00196] Avg episode reward: [(0, '4.489')]
[2025-02-13 15:51:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 614.4). Total num frames: 49152. Throughput: 0: 203.8. Samples: 14872. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:51:55,579][00196] Avg episode reward: [(0, '4.569')]
[2025-02-13 15:51:56,029][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000013_53248.pth...
[2025-02-13 15:52:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 626.4). Total num frames: 53248. Throughput: 0: 209.0. Samples: 15662. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:52:00,574][00196] Avg episode reward: [(0, '4.595')]
[2025-02-13 15:52:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 637.2). Total num frames: 57344. Throughput: 0: 200.0. Samples: 16610. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:52:05,576][00196] Avg episode reward: [(0, '4.572')]
[2025-02-13 15:52:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 646.7). Total num frames: 61440. Throughput: 0: 204.8. Samples: 17920. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:52:10,572][00196] Avg episode reward: [(0, '4.556')]
[2025-02-13 15:52:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 655.4). Total num frames: 65536. Throughput: 0: 209.2. Samples: 18872. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:52:15,572][00196] Avg episode reward: [(0, '4.546')]
[2025-02-13 15:52:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 663.2). Total num frames: 69632. Throughput: 0: 205.5. Samples: 19884. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:52:20,578][00196] Avg episode reward: [(0, '4.504')]
[2025-02-13 15:52:25,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 707.5). Total num frames: 77824. Throughput: 0: 208.0. Samples: 21162. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:52:25,574][00196] Avg episode reward: [(0, '4.468')]
[2025-02-13 15:52:29,336][10139] Updated weights for policy 0, policy_version 20 (0.0044)
[2025-02-13 15:52:30,574][00196] Fps is (10 sec: 1228.3, 60 sec: 887.4, 300 sec: 712.3). Total num frames: 81920. Throughput: 0: 212.4. Samples: 21946. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:52:30,579][00196] Avg episode reward: [(0, '4.366')]
[2025-02-13 15:52:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 716.8). Total num frames: 86016. Throughput: 0: 210.6. Samples: 22950. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 15:52:35,578][00196] Avg episode reward: [(0, '4.359')]
[2025-02-13 15:52:40,569][00196] Fps is (10 sec: 819.6, 60 sec: 887.5, 300 sec: 720.9). Total num frames: 90112. Throughput: 0: 203.1. Samples: 24010. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 15:52:40,579][00196] Avg episode reward: [(0, '4.346')]
[2025-02-13 15:52:45,573][00196] Fps is (10 sec: 818.9, 60 sec: 887.4, 300 sec: 724.7). Total num frames: 94208. Throughput: 0: 205.9. Samples: 24930. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 15:52:45,576][00196] Avg episode reward: [(0, '4.267')]
[2025-02-13 15:52:50,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 697.8). Total num frames: 94208. Throughput: 0: 207.5. Samples: 25948. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 15:52:50,572][00196] Avg episode reward: [(0, '4.264')]
[2025-02-13 15:52:55,569][00196] Fps is (10 sec: 819.5, 60 sec: 887.5, 300 sec: 731.4). Total num frames: 102400. Throughput: 0: 207.7. Samples: 27268. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 15:52:55,577][00196] Avg episode reward: [(0, '4.381')]
[2025-02-13 15:52:59,051][10122] Saving new best policy, reward=4.381!
[2025-02-13 15:53:00,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 734.5). Total num frames: 106496. Throughput: 0: 204.8. Samples: 28086. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 15:53:00,576][00196] Avg episode reward: [(0, '4.351')]
[2025-02-13 15:53:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 710.0). Total num frames: 106496. Throughput: 0: 206.4. Samples: 29170. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 15:53:05,573][00196] Avg episode reward: [(0, '4.319')]
[2025-02-13 15:53:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 739.9). Total num frames: 114688. Throughput: 0: 200.4. Samples: 30182. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:53:10,572][00196] Avg episode reward: [(0, '4.265')]
[2025-02-13 15:53:15,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 742.4). Total num frames: 118784. Throughput: 0: 204.0. Samples: 31124. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:53:15,575][00196] Avg episode reward: [(0, '4.249')]
[2025-02-13 15:53:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 719.9). Total num frames: 118784. Throughput: 0: 203.9. Samples: 32126. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:53:20,572][00196] Avg episode reward: [(0, '4.279')]
[2025-02-13 15:53:22,051][10139] Updated weights for policy 0, policy_version 30 (0.2504)
[2025-02-13 15:53:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 722.8). Total num frames: 122880. Throughput: 0: 206.4. Samples: 33296. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:53:25,575][00196] Avg episode reward: [(0, '4.271')]
[2025-02-13 15:53:30,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.3, 300 sec: 749.0). Total num frames: 131072. Throughput: 0: 203.7. Samples: 34098. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:53:30,572][00196] Avg episode reward: [(0, '4.311')]
[2025-02-13 15:53:35,570][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 728.2). Total num frames: 131072. Throughput: 0: 209.6. Samples: 35380. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:53:35,574][00196] Avg episode reward: [(0, '4.312')]
[2025-02-13 15:53:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 730.6). Total num frames: 135168. Throughput: 0: 202.4. Samples: 36376. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:53:40,573][00196] Avg episode reward: [(0, '4.244')]
[2025-02-13 15:53:45,569][00196] Fps is (10 sec: 1228.9, 60 sec: 819.2, 300 sec: 754.5). Total num frames: 143360. Throughput: 0: 199.8. Samples: 37076. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 15:53:45,576][00196] Avg episode reward: [(0, '4.254')]
[2025-02-13 15:53:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 735.2). Total num frames: 143360. Throughput: 0: 204.4. Samples: 38366. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 15:53:50,573][00196] Avg episode reward: [(0, '4.352')]
[2025-02-13 15:53:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 737.3). Total num frames: 147456. Throughput: 0: 204.8. Samples: 39398. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 15:53:55,580][00196] Avg episode reward: [(0, '4.379')]
[2025-02-13 15:53:55,711][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000037_151552.pth...
[2025-02-13 15:54:00,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 759.3). Total num frames: 155648. Throughput: 0: 206.0. Samples: 40396. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:54:00,577][00196] Avg episode reward: [(0, '4.395')]
[2025-02-13 15:54:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 741.2). Total num frames: 155648. Throughput: 0: 207.6. Samples: 41466. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:54:05,573][00196] Avg episode reward: [(0, '4.437')]
[2025-02-13 15:54:05,885][10122] Saving new best policy, reward=4.395!
[2025-02-13 15:54:10,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 743.0). Total num frames: 159744. Throughput: 0: 204.5. Samples: 42500. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:54:10,573][00196] Avg episode reward: [(0, '4.478')]
[2025-02-13 15:54:11,413][10122] Saving new best policy, reward=4.437!
[2025-02-13 15:54:11,454][10139] Updated weights for policy 0, policy_version 40 (0.2000)
[2025-02-13 15:54:15,574][00196] Fps is (10 sec: 818.8, 60 sec: 750.9, 300 sec: 744.7). Total num frames: 163840. Throughput: 0: 190.7. Samples: 42680. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:54:15,589][00196] Avg episode reward: [(0, '4.495')]
[2025-02-13 15:54:20,048][10122] Saving new best policy, reward=4.478!
[2025-02-13 15:54:20,204][10122] Saving new best policy, reward=4.495!
[2025-02-13 15:54:20,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 746.4). Total num frames: 167936. Throughput: 0: 183.1. Samples: 43618. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:54:20,574][00196] Avg episode reward: [(0, '4.541')]
[2025-02-13 15:54:25,569][00196] Fps is (10 sec: 409.8, 60 sec: 750.9, 300 sec: 730.2). Total num frames: 167936. Throughput: 0: 183.7. Samples: 44642. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:54:25,576][00196] Avg episode reward: [(0, '4.517')]
[2025-02-13 15:54:25,683][10122] Saving new best policy, reward=4.541!
[2025-02-13 15:54:30,569][00196] Fps is (10 sec: 819.4, 60 sec: 750.9, 300 sec: 749.5). Total num frames: 176128. Throughput: 0: 190.4. Samples: 45646. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 15:54:30,576][00196] Avg episode reward: [(0, '4.533')]
[2025-02-13 15:54:35,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 750.9). Total num frames: 180224. Throughput: 0: 184.9. Samples: 46688. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 15:54:35,574][00196] Avg episode reward: [(0, '4.473')]
[2025-02-13 15:54:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 735.6). Total num frames: 180224.
Throughput: 0: 184.1. Samples: 47684. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 15:54:40,575][00196] Avg episode reward: [(0, '4.396')] [2025-02-13 15:54:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 737.3). Total num frames: 184320. Throughput: 0: 177.5. Samples: 48384. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 15:54:45,576][00196] Avg episode reward: [(0, '4.418')] [2025-02-13 15:54:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 738.9). Total num frames: 188416. Throughput: 0: 185.2. Samples: 49800. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 15:54:50,576][00196] Avg episode reward: [(0, '4.358')] [2025-02-13 15:54:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 740.4). Total num frames: 192512. Throughput: 0: 183.3. Samples: 50750. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 15:54:55,578][00196] Avg episode reward: [(0, '4.374')] [2025-02-13 15:55:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 741.9). Total num frames: 196608. Throughput: 0: 193.5. Samples: 51388. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 15:55:00,578][00196] Avg episode reward: [(0, '4.456')] [2025-02-13 15:55:05,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 758.5). Total num frames: 204800. Throughput: 0: 205.2. Samples: 52850. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 15:55:05,575][00196] Avg episode reward: [(0, '4.425')] [2025-02-13 15:55:05,876][10139] Updated weights for policy 0, policy_version 50 (0.1898) [2025-02-13 15:55:10,385][10122] Signal inference workers to stop experience collection... (50 times) [2025-02-13 15:55:10,471][10139] InferenceWorker_p0-w0: stopping experience collection (50 times) [2025-02-13 15:55:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 744.7). Total num frames: 204800. Throughput: 0: 203.7. Samples: 53808. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 15:55:10,572][00196] Avg episode reward: [(0, '4.414')] [2025-02-13 15:55:11,442][10122] Signal inference workers to resume experience collection... (50 times) [2025-02-13 15:55:11,443][10139] InferenceWorker_p0-w0: resuming experience collection (50 times) [2025-02-13 15:55:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 751.0, 300 sec: 746.1). Total num frames: 208896. Throughput: 0: 195.6. Samples: 54448. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 15:55:15,573][00196] Avg episode reward: [(0, '4.432')] [2025-02-13 15:55:20,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 761.7). Total num frames: 217088. Throughput: 0: 203.8. Samples: 55860. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 15:55:20,571][00196] Avg episode reward: [(0, '4.379')] [2025-02-13 15:55:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 748.6). Total num frames: 217088. Throughput: 0: 204.1. Samples: 56868. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 15:55:25,574][00196] Avg episode reward: [(0, '4.366')] [2025-02-13 15:55:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 221184. Throughput: 0: 202.6. Samples: 57502. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:55:30,578][00196] Avg episode reward: [(0, '4.363')] [2025-02-13 15:55:35,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 229376. Throughput: 0: 201.9. Samples: 58884. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:55:35,576][00196] Avg episode reward: [(0, '4.371')] [2025-02-13 15:55:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 229376. Throughput: 0: 203.9. Samples: 59924. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:55:40,572][00196] Avg episode reward: [(0, '4.451')] [2025-02-13 15:55:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 233472. Throughput: 0: 202.8. Samples: 60514. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:55:45,578][00196] Avg episode reward: [(0, '4.438')] [2025-02-13 15:55:50,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 805.3). Total num frames: 241664. Throughput: 0: 202.2. Samples: 61948. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:55:50,577][00196] Avg episode reward: [(0, '4.376')] [2025-02-13 15:55:55,544][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000060_245760.pth... [2025-02-13 15:55:55,561][10139] Updated weights for policy 0, policy_version 60 (0.2083) [2025-02-13 15:55:55,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 805.3). Total num frames: 245760. Throughput: 0: 203.3. Samples: 62956. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:55:55,574][00196] Avg episode reward: [(0, '4.447')] [2025-02-13 15:55:55,653][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000013_53248.pth [2025-02-13 15:56:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 805.3). Total num frames: 249856. Throughput: 0: 209.2. Samples: 63860. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:56:00,574][00196] Avg episode reward: [(0, '4.463')] [2025-02-13 15:56:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 253952. Throughput: 0: 202.4. Samples: 64970. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:56:05,578][00196] Avg episode reward: [(0, '4.470')] [2025-02-13 15:56:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 819.2). Total num frames: 258048. Throughput: 0: 202.8. Samples: 65994. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 15:56:10,577][00196] Avg episode reward: [(0, '4.496')] [2025-02-13 15:56:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 819.2). Total num frames: 262144. Throughput: 0: 206.5. Samples: 66796. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:56:15,575][00196] Avg episode reward: [(0, '4.572')] [2025-02-13 15:56:20,113][10122] Saving new best policy, reward=4.572! [2025-02-13 15:56:20,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 266240. Throughput: 0: 202.5. Samples: 67998. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:56:20,575][00196] Avg episode reward: [(0, '4.670')] [2025-02-13 15:56:25,426][10122] Saving new best policy, reward=4.670! [2025-02-13 15:56:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 819.2). Total num frames: 270336. Throughput: 0: 201.3. Samples: 68984. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:56:25,574][00196] Avg episode reward: [(0, '4.656')] [2025-02-13 15:56:30,570][00196] Fps is (10 sec: 409.7, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 270336. Throughput: 0: 205.1. Samples: 69744. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:56:30,577][00196] Avg episode reward: [(0, '4.729')] [2025-02-13 15:56:35,328][10122] Saving new best policy, reward=4.729! [2025-02-13 15:56:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 278528. Throughput: 0: 201.8. Samples: 71028. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:56:35,571][00196] Avg episode reward: [(0, '4.677')] [2025-02-13 15:56:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 278528. Throughput: 0: 201.2. Samples: 72008. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:56:40,573][00196] Avg episode reward: [(0, '4.681')] [2025-02-13 15:56:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 282624. Throughput: 0: 195.5. Samples: 72656. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:56:45,581][00196] Avg episode reward: [(0, '4.651')] [2025-02-13 15:56:45,841][10139] Updated weights for policy 0, policy_version 70 (0.0693) [2025-02-13 15:56:50,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 290816. Throughput: 0: 201.5. Samples: 74036. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:56:50,574][00196] Avg episode reward: [(0, '4.565')] [2025-02-13 15:56:55,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 294912. Throughput: 0: 203.6. Samples: 75158. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:56:55,575][00196] Avg episode reward: [(0, '4.569')] [2025-02-13 15:57:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 299008. Throughput: 0: 200.1. Samples: 75800. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:00,573][00196] Avg episode reward: [(0, '4.621')] [2025-02-13 15:57:05,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 303104. Throughput: 0: 203.4. Samples: 77150. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:05,576][00196] Avg episode reward: [(0, '4.497')] [2025-02-13 15:57:10,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 307200. Throughput: 0: 210.1. Samples: 78440. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:10,576][00196] Avg episode reward: [(0, '4.517')] [2025-02-13 15:57:15,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 311296. Throughput: 0: 207.0. Samples: 79060. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:15,576][00196] Avg episode reward: [(0, '4.531')] [2025-02-13 15:57:20,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 315392. Throughput: 0: 204.2. Samples: 80216. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:20,574][00196] Avg episode reward: [(0, '4.513')] [2025-02-13 15:57:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 319488. Throughput: 0: 210.4. Samples: 81474. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:25,575][00196] Avg episode reward: [(0, '4.538')] [2025-02-13 15:57:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 805.3). Total num frames: 323584. Throughput: 0: 209.2. Samples: 82070. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:30,572][00196] Avg episode reward: [(0, '4.495')] [2025-02-13 15:57:34,561][10139] Updated weights for policy 0, policy_version 80 (0.1009) [2025-02-13 15:57:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 327680. Throughput: 0: 203.9. Samples: 83210. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:35,573][00196] Avg episode reward: [(0, '4.510')] [2025-02-13 15:57:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 805.3). Total num frames: 331776. Throughput: 0: 215.7. Samples: 84866. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:40,575][00196] Avg episode reward: [(0, '4.524')] [2025-02-13 15:57:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 819.2). Total num frames: 335872. Throughput: 0: 210.3. Samples: 85262. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:45,577][00196] Avg episode reward: [(0, '4.517')] [2025-02-13 15:57:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 339968. Throughput: 0: 204.2. Samples: 86338. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:50,577][00196] Avg episode reward: [(0, '4.473')] [2025-02-13 15:57:53,244][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000084_344064.pth... [2025-02-13 15:57:53,364][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000037_151552.pth [2025-02-13 15:57:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 344064. Throughput: 0: 216.6. Samples: 88186. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:57:55,577][00196] Avg episode reward: [(0, '4.387')] [2025-02-13 15:58:00,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 348160. Throughput: 0: 207.0. Samples: 88376. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:00,578][00196] Avg episode reward: [(0, '4.387')] [2025-02-13 15:58:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 352256. Throughput: 0: 208.6. Samples: 89604. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:05,574][00196] Avg episode reward: [(0, '4.325')] [2025-02-13 15:58:10,570][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 356352. Throughput: 0: 222.2. Samples: 91474. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:10,581][00196] Avg episode reward: [(0, '4.408')] [2025-02-13 15:58:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 360448. Throughput: 0: 212.6. Samples: 91638. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:15,572][00196] Avg episode reward: [(0, '4.425')] [2025-02-13 15:58:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 364544. Throughput: 0: 215.5. Samples: 92906. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:20,573][00196] Avg episode reward: [(0, '4.418')] [2025-02-13 15:58:22,649][10139] Updated weights for policy 0, policy_version 90 (0.0971) [2025-02-13 15:58:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 368640. Throughput: 0: 214.2. Samples: 94506. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:25,573][00196] Avg episode reward: [(0, '4.464')] [2025-02-13 15:58:30,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 372736. Throughput: 0: 211.4. Samples: 94774. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:30,576][00196] Avg episode reward: [(0, '4.520')] [2025-02-13 15:58:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 376832. Throughput: 0: 211.8. Samples: 95870. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:35,572][00196] Avg episode reward: [(0, '4.529')] [2025-02-13 15:58:40,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 380928. Throughput: 0: 206.8. Samples: 97494. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:40,573][00196] Avg episode reward: [(0, '4.583')] [2025-02-13 15:58:45,571][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 385024. Throughput: 0: 213.3. Samples: 97974. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:45,575][00196] Avg episode reward: [(0, '4.595')] [2025-02-13 15:58:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 389120. Throughput: 0: 208.8. Samples: 99000. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:50,572][00196] Avg episode reward: [(0, '4.579')] [2025-02-13 15:58:55,575][00196] Fps is (10 sec: 818.9, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 393216. Throughput: 0: 203.8. Samples: 100648. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:58:55,581][00196] Avg episode reward: [(0, '4.631')] [2025-02-13 15:59:00,573][00196] Fps is (10 sec: 818.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 397312. Throughput: 0: 212.4. Samples: 101198. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:00,582][00196] Avg episode reward: [(0, '4.580')] [2025-02-13 15:59:05,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 401408. Throughput: 0: 205.5. Samples: 102154. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:05,578][00196] Avg episode reward: [(0, '4.613')] [2025-02-13 15:59:10,570][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 405504. Throughput: 0: 205.6. Samples: 103760. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:10,578][00196] Avg episode reward: [(0, '4.594')] [2025-02-13 15:59:11,774][10139] Updated weights for policy 0, policy_version 100 (0.0047) [2025-02-13 15:59:14,757][10122] Signal inference workers to stop experience collection... (100 times) [2025-02-13 15:59:14,859][10139] InferenceWorker_p0-w0: stopping experience collection (100 times) [2025-02-13 15:59:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 409600. Throughput: 0: 214.3. Samples: 104416. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:15,571][00196] Avg episode reward: [(0, '4.571')] [2025-02-13 15:59:17,101][10122] Signal inference workers to resume experience collection... (100 times) [2025-02-13 15:59:17,102][10139] InferenceWorker_p0-w0: resuming experience collection (100 times) [2025-02-13 15:59:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 413696. Throughput: 0: 210.4. Samples: 105340. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:20,574][00196] Avg episode reward: [(0, '4.524')] [2025-02-13 15:59:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 417792. Throughput: 0: 208.0. Samples: 106854. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:25,578][00196] Avg episode reward: [(0, '4.462')] [2025-02-13 15:59:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 421888. Throughput: 0: 211.5. Samples: 107490. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:30,576][00196] Avg episode reward: [(0, '4.482')] [2025-02-13 15:59:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 425984. Throughput: 0: 211.5. Samples: 108518. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:35,574][00196] Avg episode reward: [(0, '4.585')] [2025-02-13 15:59:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 430080. Throughput: 0: 206.4. Samples: 109936. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 15:59:40,576][00196] Avg episode reward: [(0, '4.661')] [2025-02-13 15:59:45,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 847.0). Total num frames: 438272. Throughput: 0: 211.7. Samples: 110724. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:59:45,573][00196] Avg episode reward: [(0, '4.519')] [2025-02-13 15:59:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 438272. Throughput: 0: 213.1. Samples: 111742. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:59:50,572][00196] Avg episode reward: [(0, '4.554')] [2025-02-13 15:59:55,570][00196] Fps is (10 sec: 409.6, 60 sec: 819.3, 300 sec: 833.1). Total num frames: 442368. Throughput: 0: 205.5. Samples: 113006. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 15:59:55,577][00196] Avg episode reward: [(0, '4.515')] [2025-02-13 15:59:55,941][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000109_446464.pth... [2025-02-13 15:59:56,069][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000060_245760.pth [2025-02-13 16:00:00,549][10139] Updated weights for policy 0, policy_version 110 (0.0965) [2025-02-13 16:00:00,570][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 833.1). Total num frames: 450560. Throughput: 0: 211.7. Samples: 113942. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:00,575][00196] Avg episode reward: [(0, '4.520')] [2025-02-13 16:00:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 450560. Throughput: 0: 208.0. Samples: 114700. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:05,575][00196] Avg episode reward: [(0, '4.539')] [2025-02-13 16:00:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 454656. Throughput: 0: 205.4. Samples: 116096. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:10,573][00196] Avg episode reward: [(0, '4.512')] [2025-02-13 16:00:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 458752. Throughput: 0: 206.1. Samples: 116764. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:00:15,577][00196] Avg episode reward: [(0, '4.505')] [2025-02-13 16:00:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 462848. Throughput: 0: 210.3. Samples: 117982. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:00:20,573][00196] Avg episode reward: [(0, '4.430')] [2025-02-13 16:00:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 466944. Throughput: 0: 206.3. Samples: 119218. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:25,573][00196] Avg episode reward: [(0, '4.436')] [2025-02-13 16:00:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 471040. Throughput: 0: 204.0. Samples: 119902. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:30,578][00196] Avg episode reward: [(0, '4.383')] [2025-02-13 16:00:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 475136. Throughput: 0: 207.2. Samples: 121066. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:35,577][00196] Avg episode reward: [(0, '4.410')] [2025-02-13 16:00:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 479232. Throughput: 0: 205.4. Samples: 122248. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:40,580][00196] Avg episode reward: [(0, '4.443')] [2025-02-13 16:00:45,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 819.2). Total num frames: 483328. Throughput: 0: 202.5. Samples: 123056. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:45,574][00196] Avg episode reward: [(0, '4.459')] [2025-02-13 16:00:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 487424. Throughput: 0: 211.9. Samples: 124234. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:50,572][00196] Avg episode reward: [(0, '4.426')] [2025-02-13 16:00:51,203][10139] Updated weights for policy 0, policy_version 120 (0.1177) [2025-02-13 16:00:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 491520. Throughput: 0: 203.7. Samples: 125262. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:00:55,572][00196] Avg episode reward: [(0, '4.481')] [2025-02-13 16:01:00,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 499712. Throughput: 0: 210.5. Samples: 126238. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:01:00,572][00196] Avg episode reward: [(0, '4.472')] [2025-02-13 16:01:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 499712. Throughput: 0: 207.7. Samples: 127328. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:01:05,573][00196] Avg episode reward: [(0, '4.446')] [2025-02-13 16:01:10,570][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 503808. Throughput: 0: 203.6. Samples: 128382. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:01:10,574][00196] Avg episode reward: [(0, '4.527')] [2025-02-13 16:01:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 507904. Throughput: 0: 204.1. Samples: 129086. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:01:15,578][00196] Avg episode reward: [(0, '4.582')] [2025-02-13 16:01:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 512000. Throughput: 0: 203.3. Samples: 130216. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:01:20,575][00196] Avg episode reward: [(0, '4.611')] [2025-02-13 16:01:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 516096. Throughput: 0: 201.6. Samples: 131320. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:01:25,577][00196] Avg episode reward: [(0, '4.683')] [2025-02-13 16:01:30,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 520192. Throughput: 0: 196.4. Samples: 131896. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:01:30,576][00196] Avg episode reward: [(0, '4.669')] [2025-02-13 16:01:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 524288. Throughput: 0: 203.5. Samples: 133390. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:01:35,575][00196] Avg episode reward: [(0, '4.627')]
[2025-02-13 16:01:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 528384. Throughput: 0: 200.3. Samples: 134274. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:01:40,572][00196] Avg episode reward: [(0, '4.724')]
[2025-02-13 16:01:42,352][10139] Updated weights for policy 0, policy_version 130 (0.1132)
[2025-02-13 16:01:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 532480. Throughput: 0: 193.9. Samples: 134964. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:01:45,578][00196] Avg episode reward: [(0, '4.681')]
[2025-02-13 16:01:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 536576. Throughput: 0: 200.9. Samples: 136370. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:01:50,577][00196] Avg episode reward: [(0, '4.714')]
[2025-02-13 16:01:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 540672. Throughput: 0: 198.9. Samples: 137332. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:01:55,573][00196] Avg episode reward: [(0, '4.782')]
[2025-02-13 16:01:58,027][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000133_544768.pth...
[2025-02-13 16:01:58,158][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000084_344064.pth
[2025-02-13 16:01:58,172][10122] Saving new best policy, reward=4.782!
[2025-02-13 16:02:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 819.2). Total num frames: 544768. Throughput: 0: 194.8. Samples: 137852. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:02:00,577][00196] Avg episode reward: [(0, '4.763')]
[2025-02-13 16:02:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 548864. Throughput: 0: 203.2. Samples: 139360. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:05,578][00196] Avg episode reward: [(0, '4.749')]
[2025-02-13 16:02:10,575][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 552960. Throughput: 0: 199.2. Samples: 140286. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:10,584][00196] Avg episode reward: [(0, '4.707')]
[2025-02-13 16:02:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 557056. Throughput: 0: 197.4. Samples: 140780. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:15,574][00196] Avg episode reward: [(0, '4.720')]
[2025-02-13 16:02:20,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 561152. Throughput: 0: 198.4. Samples: 142316. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:20,575][00196] Avg episode reward: [(0, '4.670')]
[2025-02-13 16:02:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 565248. Throughput: 0: 191.3. Samples: 142884. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:25,572][00196] Avg episode reward: [(0, '4.629')]
[2025-02-13 16:02:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 569344. Throughput: 0: 198.5. Samples: 143896. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:30,572][00196] Avg episode reward: [(0, '4.598')]
[2025-02-13 16:02:33,139][10139] Updated weights for policy 0, policy_version 140 (0.0583)
[2025-02-13 16:02:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 573440. Throughput: 0: 200.1. Samples: 145376. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:35,573][00196] Avg episode reward: [(0, '4.742')]
[2025-02-13 16:02:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 577536. Throughput: 0: 206.7. Samples: 146634. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:40,576][00196] Avg episode reward: [(0, '4.726')]
[2025-02-13 16:02:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 581632. Throughput: 0: 203.5. Samples: 147008. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:45,572][00196] Avg episode reward: [(0, '4.722')]
[2025-02-13 16:02:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 585728. Throughput: 0: 202.4. Samples: 148468. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:02:50,573][00196] Avg episode reward: [(0, '4.680')]
[2025-02-13 16:02:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 589824. Throughput: 0: 212.5. Samples: 149848. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:02:55,575][00196] Avg episode reward: [(0, '4.677')]
[2025-02-13 16:03:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 593920. Throughput: 0: 207.2. Samples: 150104. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:03:00,573][00196] Avg episode reward: [(0, '4.595')]
[2025-02-13 16:03:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 598016. Throughput: 0: 204.3. Samples: 151508. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:03:05,576][00196] Avg episode reward: [(0, '4.599')]
[2025-02-13 16:03:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 602112. Throughput: 0: 219.5. Samples: 152762. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:03:10,577][00196] Avg episode reward: [(0, '4.638')]
[2025-02-13 16:03:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 606208. Throughput: 0: 204.8. Samples: 153114. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:03:15,574][00196] Avg episode reward: [(0, '4.568')]
[2025-02-13 16:03:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 610304. Throughput: 0: 202.4. Samples: 154486. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:03:20,576][00196] Avg episode reward: [(0, '4.555')]
[2025-02-13 16:03:23,329][10139] Updated weights for policy 0, policy_version 150 (0.0541)
[2025-02-13 16:03:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 614400. Throughput: 0: 202.8. Samples: 155760. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:03:25,572][00196] Avg episode reward: [(0, '4.473')]
[2025-02-13 16:03:27,939][10122] Signal inference workers to stop experience collection... (150 times)
[2025-02-13 16:03:28,058][10139] InferenceWorker_p0-w0: stopping experience collection (150 times)
[2025-02-13 16:03:29,964][10122] Signal inference workers to resume experience collection... (150 times)
[2025-02-13 16:03:29,965][10139] InferenceWorker_p0-w0: resuming experience collection (150 times)
[2025-02-13 16:03:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 618496. Throughput: 0: 205.1. Samples: 156238. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:03:30,576][00196] Avg episode reward: [(0, '4.489')]
[2025-02-13 16:03:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 622592. Throughput: 0: 195.4. Samples: 157262. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:03:35,573][00196] Avg episode reward: [(0, '4.512')]
[2025-02-13 16:03:40,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 626688. Throughput: 0: 194.1. Samples: 158582. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:03:40,573][00196] Avg episode reward: [(0, '4.629')]
[2025-02-13 16:03:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 626688. Throughput: 0: 200.4. Samples: 159122. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:03:45,576][00196] Avg episode reward: [(0, '4.665')]
[2025-02-13 16:03:50,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 630784. Throughput: 0: 196.4. Samples: 160344. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:03:50,579][00196] Avg episode reward: [(0, '4.679')]
[2025-02-13 16:03:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 634880. Throughput: 0: 191.4. Samples: 161374. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:03:55,572][00196] Avg episode reward: [(0, '4.684')]
[2025-02-13 16:03:55,737][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000156_638976.pth...
[2025-02-13 16:03:55,849][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000109_446464.pth
[2025-02-13 16:04:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 638976. Throughput: 0: 197.2. Samples: 161986. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:04:00,577][00196] Avg episode reward: [(0, '4.736')]
[2025-02-13 16:04:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 643072. Throughput: 0: 191.9. Samples: 163120. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:04:05,576][00196] Avg episode reward: [(0, '4.701')]
[2025-02-13 16:04:10,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 647168. Throughput: 0: 192.7. Samples: 164430. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:04:10,575][00196] Avg episode reward: [(0, '4.765')]
[2025-02-13 16:04:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 651264. Throughput: 0: 192.0. Samples: 164880. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:04:15,573][00196] Avg episode reward: [(0, '4.705')]
[2025-02-13 16:04:18,718][10139] Updated weights for policy 0, policy_version 160 (0.1117)
[2025-02-13 16:04:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 655360. Throughput: 0: 188.4. Samples: 165738. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:04:20,573][00196] Avg episode reward: [(0, '4.682')]
[2025-02-13 16:04:25,570][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 655360. Throughput: 0: 176.6. Samples: 166528. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:04:25,582][00196] Avg episode reward: [(0, '4.686')]
[2025-02-13 16:04:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 659456. Throughput: 0: 169.2. Samples: 166738. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:04:30,580][00196] Avg episode reward: [(0, '4.669')]
[2025-02-13 16:04:35,570][00196] Fps is (10 sec: 409.6, 60 sec: 614.4, 300 sec: 777.5). Total num frames: 659456. Throughput: 0: 157.9. Samples: 167448. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:04:35,574][00196] Avg episode reward: [(0, '4.616')]
[2025-02-13 16:04:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 614.4, 300 sec: 763.7). Total num frames: 663552. Throughput: 0: 157.0. Samples: 168438. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:04:40,577][00196] Avg episode reward: [(0, '4.579')]
[2025-02-13 16:04:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 777.5). Total num frames: 667648. Throughput: 0: 160.7. Samples: 169216. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:04:45,573][00196] Avg episode reward: [(0, '4.563')]
[2025-02-13 16:04:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 777.5). Total num frames: 671744. Throughput: 0: 156.4. Samples: 170158. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:04:50,574][00196] Avg episode reward: [(0, '4.522')]
[2025-02-13 16:04:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 763.7). Total num frames: 675840. Throughput: 0: 156.4. Samples: 171470. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:04:55,576][00196] Avg episode reward: [(0, '4.383')]
[2025-02-13 16:05:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 777.5). Total num frames: 679936. Throughput: 0: 155.1. Samples: 171858. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:00,572][00196] Avg episode reward: [(0, '4.440')]
[2025-02-13 16:05:05,570][00196] Fps is (10 sec: 819.1, 60 sec: 682.7, 300 sec: 777.5). Total num frames: 684032. Throughput: 0: 160.8. Samples: 172976. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:05,579][00196] Avg episode reward: [(0, '4.437')]
[2025-02-13 16:05:10,570][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 777.5). Total num frames: 688128. Throughput: 0: 165.0. Samples: 173952. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:10,573][00196] Avg episode reward: [(0, '4.421')]
[2025-02-13 16:05:15,569][00196] Fps is (10 sec: 819.3, 60 sec: 682.7, 300 sec: 777.5). Total num frames: 692224. Throughput: 0: 173.8. Samples: 174560. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:15,576][00196] Avg episode reward: [(0, '4.475')]
[2025-02-13 16:05:18,778][10139] Updated weights for policy 0, policy_version 170 (0.2894)
[2025-02-13 16:05:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 777.5). Total num frames: 696320. Throughput: 0: 184.1. Samples: 175732. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:05:20,574][00196] Avg episode reward: [(0, '4.478')]
[2025-02-13 16:05:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 700416. Throughput: 0: 182.9. Samples: 176670. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:05:25,575][00196] Avg episode reward: [(0, '4.563')]
[2025-02-13 16:05:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 704512. Throughput: 0: 186.8. Samples: 177624. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:30,572][00196] Avg episode reward: [(0, '4.511')]
[2025-02-13 16:05:35,579][00196] Fps is (10 sec: 818.4, 60 sec: 819.1, 300 sec: 777.5). Total num frames: 708608. Throughput: 0: 192.0. Samples: 178798. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:35,584][00196] Avg episode reward: [(0, '4.597')]
[2025-02-13 16:05:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 712704. Throughput: 0: 186.0. Samples: 179842. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:40,576][00196] Avg episode reward: [(0, '4.614')]
[2025-02-13 16:05:45,569][00196] Fps is (10 sec: 820.0, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 716800. Throughput: 0: 197.2. Samples: 180732. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:45,573][00196] Avg episode reward: [(0, '4.691')]
[2025-02-13 16:05:50,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 720896. Throughput: 0: 198.3. Samples: 181900. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:50,576][00196] Avg episode reward: [(0, '4.695')]
[2025-02-13 16:05:55,123][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000177_724992.pth...
[2025-02-13 16:05:55,256][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000133_544768.pth
[2025-02-13 16:05:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 724992. Throughput: 0: 196.1. Samples: 182776. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:05:55,577][00196] Avg episode reward: [(0, '4.682')]
[2025-02-13 16:06:00,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 729088. Throughput: 0: 202.0. Samples: 183648. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:06:00,572][00196] Avg episode reward: [(0, '4.753')]
[2025-02-13 16:06:05,574][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 733184. Throughput: 0: 203.8. Samples: 184904. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:06:05,577][00196] Avg episode reward: [(0, '4.836')]
[2025-02-13 16:06:10,244][10122] Saving new best policy, reward=4.836!
[2025-02-13 16:06:10,255][10139] Updated weights for policy 0, policy_version 180 (0.1034)
[2025-02-13 16:06:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 737280. Throughput: 0: 203.6. Samples: 185832. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:06:10,573][00196] Avg episode reward: [(0, '4.784')]
[2025-02-13 16:06:15,570][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 741376. Throughput: 0: 202.8. Samples: 186752. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:06:15,573][00196] Avg episode reward: [(0, '4.693')]
[2025-02-13 16:06:20,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 745472. Throughput: 0: 204.3. Samples: 187990. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:06:20,577][00196] Avg episode reward: [(0, '4.782')]
[2025-02-13 16:06:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 749568. Throughput: 0: 201.2. Samples: 188894. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:06:25,575][00196] Avg episode reward: [(0, '4.752')]
[2025-02-13 16:06:30,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 753664. Throughput: 0: 201.8. Samples: 189812. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:06:30,576][00196] Avg episode reward: [(0, '4.778')]
[2025-02-13 16:06:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 777.5). Total num frames: 757760. Throughput: 0: 203.8. Samples: 191072. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:06:35,579][00196] Avg episode reward: [(0, '4.863')]
[2025-02-13 16:06:39,738][10122] Saving new best policy, reward=4.863!
[2025-02-13 16:06:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 761856. Throughput: 0: 206.0. Samples: 192046. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:06:40,572][00196] Avg episode reward: [(0, '4.843')]
[2025-02-13 16:06:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 765952. Throughput: 0: 206.3. Samples: 192930. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:06:45,575][00196] Avg episode reward: [(0, '5.052')]
[2025-02-13 16:06:48,976][10122] Saving new best policy, reward=5.052!
[2025-02-13 16:06:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 770048. Throughput: 0: 204.6. Samples: 194110. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:06:50,583][00196] Avg episode reward: [(0, '4.984')]
[2025-02-13 16:06:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 774144. Throughput: 0: 202.3. Samples: 194936. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:06:55,573][00196] Avg episode reward: [(0, '4.990')]
[2025-02-13 16:06:59,435][10139] Updated weights for policy 0, policy_version 190 (0.0673)
[2025-02-13 16:07:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 778240. Throughput: 0: 206.1. Samples: 196026. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:00,572][00196] Avg episode reward: [(0, '4.982')]
[2025-02-13 16:07:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 777.6). Total num frames: 782336. Throughput: 0: 204.4. Samples: 197188. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:05,572][00196] Avg episode reward: [(0, '5.073')]
[2025-02-13 16:07:09,861][10122] Saving new best policy, reward=5.073!
[2025-02-13 16:07:10,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 786432. Throughput: 0: 207.2. Samples: 198218. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:10,579][00196] Avg episode reward: [(0, '5.159')]
[2025-02-13 16:07:15,000][10122] Saving new best policy, reward=5.159!
[2025-02-13 16:07:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 790528. Throughput: 0: 207.5. Samples: 199150. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:15,579][00196] Avg episode reward: [(0, '5.160')]
[2025-02-13 16:07:19,438][10122] Saving new best policy, reward=5.160!
[2025-02-13 16:07:20,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 794624. Throughput: 0: 201.1. Samples: 200122. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:07:20,578][00196] Avg episode reward: [(0, '5.065')]
[2025-02-13 16:07:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 794624. Throughput: 0: 202.1. Samples: 201142. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:07:25,575][00196] Avg episode reward: [(0, '5.018')]
[2025-02-13 16:07:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 802816. Throughput: 0: 203.6. Samples: 202092. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:30,573][00196] Avg episode reward: [(0, '4.978')]
[2025-02-13 16:07:35,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 806912. Throughput: 0: 201.6. Samples: 203182. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:35,574][00196] Avg episode reward: [(0, '5.021')]
[2025-02-13 16:07:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 806912. Throughput: 0: 205.4. Samples: 204180. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:40,576][00196] Avg episode reward: [(0, '5.001')]
[2025-02-13 16:07:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 815104. Throughput: 0: 196.7. Samples: 204878. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:45,583][00196] Avg episode reward: [(0, '4.969')]
[2025-02-13 16:07:50,233][10139] Updated weights for policy 0, policy_version 200 (0.1117)
[2025-02-13 16:07:50,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 819200. Throughput: 0: 199.8. Samples: 206180. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:50,571][00196] Avg episode reward: [(0, '5.047')]
[2025-02-13 16:07:54,696][10122] Signal inference workers to stop experience collection... (200 times)
[2025-02-13 16:07:54,809][10139] InferenceWorker_p0-w0: stopping experience collection (200 times)
[2025-02-13 16:07:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 819200. Throughput: 0: 200.9. Samples: 207260. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:07:55,577][00196] Avg episode reward: [(0, '5.062')]
[2025-02-13 16:07:56,032][10122] Signal inference workers to resume experience collection... (200 times)
[2025-02-13 16:07:56,034][10139] InferenceWorker_p0-w0: resuming experience collection (200 times)
[2025-02-13 16:07:56,042][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000201_823296.pth...
[2025-02-13 16:07:56,167][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000156_638976.pth
[2025-02-13 16:08:00,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 823296. Throughput: 0: 197.3. Samples: 208028. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:00,578][00196] Avg episode reward: [(0, '5.026')]
[2025-02-13 16:08:05,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 831488. Throughput: 0: 202.8. Samples: 209248. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:08:05,577][00196] Avg episode reward: [(0, '5.036')]
[2025-02-13 16:08:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 831488. Throughput: 0: 204.0. Samples: 210320. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:08:10,572][00196] Avg episode reward: [(0, '4.985')]
[2025-02-13 16:08:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 835584. Throughput: 0: 199.1. Samples: 211052. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:15,578][00196] Avg episode reward: [(0, '5.037')]
[2025-02-13 16:08:20,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 843776. Throughput: 0: 205.4. Samples: 212424. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:20,579][00196] Avg episode reward: [(0, '5.111')]
[2025-02-13 16:08:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 843776. Throughput: 0: 205.6. Samples: 213432. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:25,573][00196] Avg episode reward: [(0, '5.072')]
[2025-02-13 16:08:30,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 847872. Throughput: 0: 204.0. Samples: 214060. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:30,576][00196] Avg episode reward: [(0, '5.059')]
[2025-02-13 16:08:35,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 856064. Throughput: 0: 206.8. Samples: 215486. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:35,572][00196] Avg episode reward: [(0, '5.085')]
[2025-02-13 16:08:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 856064. Throughput: 0: 206.5. Samples: 216552. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:40,577][00196] Avg episode reward: [(0, '5.016')]
[2025-02-13 16:08:40,833][10139] Updated weights for policy 0, policy_version 210 (0.0638)
[2025-02-13 16:08:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 860160. Throughput: 0: 200.8. Samples: 217062. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:45,572][00196] Avg episode reward: [(0, '5.051')]
[2025-02-13 16:08:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 864256. Throughput: 0: 208.7. Samples: 218638. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:08:50,578][00196] Avg episode reward: [(0, '5.062')]
[2025-02-13 16:08:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 868352. Throughput: 0: 207.9. Samples: 219676. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:08:55,572][00196] Avg episode reward: [(0, '5.008')]
[2025-02-13 16:09:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 872448. Throughput: 0: 204.2. Samples: 220242. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:00,572][00196] Avg episode reward: [(0, '5.009')]
[2025-02-13 16:09:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 876544. Throughput: 0: 206.5. Samples: 221716. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:05,573][00196] Avg episode reward: [(0, '4.917')]
[2025-02-13 16:09:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 880640. Throughput: 0: 207.1. Samples: 222750. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:10,576][00196] Avg episode reward: [(0, '4.852')]
[2025-02-13 16:09:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 884736. Throughput: 0: 202.7. Samples: 223182. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:15,575][00196] Avg episode reward: [(0, '4.872')]
[2025-02-13 16:09:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 888832. Throughput: 0: 203.9. Samples: 224662. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:20,578][00196] Avg episode reward: [(0, '4.885')]
[2025-02-13 16:09:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 892928. Throughput: 0: 206.3. Samples: 225834. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:25,572][00196] Avg episode reward: [(0, '4.916')]
[2025-02-13 16:09:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 897024. Throughput: 0: 203.3. Samples: 226210. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:30,578][00196] Avg episode reward: [(0, '4.903')]
[2025-02-13 16:09:32,540][10139] Updated weights for policy 0, policy_version 220 (0.1144)
[2025-02-13 16:09:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 901120. Throughput: 0: 199.4. Samples: 227612. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:35,573][00196] Avg episode reward: [(0, '4.915')]
[2025-02-13 16:09:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 905216. Throughput: 0: 206.5. Samples: 228970. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:09:40,575][00196] Avg episode reward: [(0, '5.016')]
[2025-02-13 16:09:45,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 909312. Throughput: 0: 202.9. Samples: 229374. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:09:45,578][00196] Avg episode reward: [(0, '4.932')]
[2025-02-13 16:09:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 913408. Throughput: 0: 192.4. Samples: 230374. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:09:50,571][00196] Avg episode reward: [(0, '4.939')]
[2025-02-13 16:09:52,391][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000224_917504.pth...
[2025-02-13 16:09:52,526][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000177_724992.pth
[2025-02-13 16:09:55,570][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 917504. Throughput: 0: 206.8. Samples: 232058. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:09:55,577][00196] Avg episode reward: [(0, '4.985')]
[2025-02-13 16:10:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 921600. Throughput: 0: 205.8. Samples: 232442. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:10:00,573][00196] Avg episode reward: [(0, '4.998')]
[2025-02-13 16:10:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 925696. Throughput: 0: 190.0. Samples: 233212. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:10:05,572][00196] Avg episode reward: [(0, '4.966')]
[2025-02-13 16:10:10,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 929792. Throughput: 0: 201.3. Samples: 234894. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:10:10,578][00196] Avg episode reward: [(0, '5.128')]
[2025-02-13 16:10:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 933888. Throughput: 0: 208.6. Samples: 235598. Policy #0 lag: (min: 1.0, avg: 1.7, max: 2.0)
[2025-02-13 16:10:15,574][00196] Avg episode reward: [(0, '4.999')]
[2025-02-13 16:10:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 937984. Throughput: 0: 198.8. Samples: 236558. Policy #0 lag: (min: 1.0, avg: 1.7, max: 2.0)
[2025-02-13 16:10:20,574][00196] Avg episode reward: [(0, '4.977')]
[2025-02-13 16:10:22,754][10139] Updated weights for policy 0, policy_version 230 (0.1607)
[2025-02-13 16:10:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 942080. Throughput: 0: 204.2. Samples: 238160. Policy #0 lag: (min: 1.0, avg: 1.7, max: 2.0)
[2025-02-13 16:10:25,574][00196] Avg episode reward: [(0, '5.088')]
[2025-02-13 16:10:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 946176. Throughput: 0: 208.5. Samples: 238754. Policy #0 lag: (min: 1.0, avg: 1.7, max: 2.0)
[2025-02-13 16:10:30,575][00196] Avg episode reward: [(0, '4.960')]
[2025-02-13 16:10:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 950272. Throughput: 0: 207.5. Samples: 239710. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:10:35,572][00196] Avg episode reward: [(0, '5.066')]
[2025-02-13 16:10:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 954368. Throughput: 0: 203.4. Samples: 241212. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:10:40,576][00196] Avg episode reward: [(0, '5.090')]
[2025-02-13 16:10:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 958464. Throughput: 0: 210.2. Samples: 241902. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:10:45,573][00196] Avg episode reward: [(0, '5.107')]
[2025-02-13 16:10:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 962560. Throughput: 0: 211.6. Samples: 242734. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:10:50,579][00196] Avg episode reward: [(0, '5.141')]
[2025-02-13 16:10:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 966656. Throughput: 0: 210.1. Samples: 244350. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:10:55,579][00196] Avg episode reward: [(0, '5.226')]
[2025-02-13 16:11:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 970752. Throughput: 0: 205.0. Samples: 244824. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:11:00,571][00196] Avg episode reward: [(0, '5.131')]
[2025-02-13 16:11:03,019][10122] Saving new best policy, reward=5.226!
[2025-02-13 16:11:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 974848. Throughput: 0: 205.1. Samples: 245788. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:11:05,578][00196] Avg episode reward: [(0, '5.194')]
[2025-02-13 16:11:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 978944. Throughput: 0: 206.5. Samples: 247454. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 16:11:10,574][00196] Avg episode reward: [(0, '5.154')]
[2025-02-13 16:11:12,708][10139] Updated weights for policy 0, policy_version 240 (0.1221)
[2025-02-13 16:11:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 983040. Throughput: 0: 200.1. Samples: 247760. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:11:15,573][00196] Avg episode reward: [(0, '5.229')]
[2025-02-13 16:11:18,131][10122] Saving new best policy, reward=5.229!
[2025-02-13 16:11:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 987136. Throughput: 0: 199.9. Samples: 248704. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:11:20,572][00196] Avg episode reward: [(0, '5.211')]
[2025-02-13 16:11:25,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 991232. Throughput: 0: 204.0. Samples: 250394. Policy #0 lag: (min: 1.0, avg: 1.7, max: 2.0)
[2025-02-13 16:11:25,572][00196] Avg episode reward: [(0, '5.260')]
[2025-02-13 16:11:27,458][10122] Saving new best policy, reward=5.260!
[2025-02-13 16:11:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 995328. Throughput: 0: 199.3. Samples: 250872. Policy #0 lag: (min: 1.0, avg: 1.7, max: 2.0)
[2025-02-13 16:11:30,582][00196] Avg episode reward: [(0, '5.321')]
[2025-02-13 16:11:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 999424. Throughput: 0: 200.8. Samples: 251768. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:11:35,572][00196] Avg episode reward: [(0, '5.338')]
[2025-02-13 16:11:38,069][10122] Saving new best policy, reward=5.321!
[2025-02-13 16:11:38,221][10122] Saving new best policy, reward=5.338!
[2025-02-13 16:11:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1003520. Throughput: 0: 202.4. Samples: 253458. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:11:40,573][00196] Avg episode reward: [(0, '5.426')]
[2025-02-13 16:11:42,661][10122] Saving new best policy, reward=5.426!
[2025-02-13 16:11:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1007616. Throughput: 0: 200.4. Samples: 253842. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:11:45,577][00196] Avg episode reward: [(0, '5.407')]
[2025-02-13 16:11:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1011712. Throughput: 0: 202.1. Samples: 254884. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:11:50,579][00196] Avg episode reward: [(0, '5.441')]
[2025-02-13 16:11:52,987][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000248_1015808.pth...
[2025-02-13 16:11:53,101][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000201_823296.pth
[2025-02-13 16:11:53,113][10122] Saving new best policy, reward=5.441!
[2025-02-13 16:11:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1015808. Throughput: 0: 202.1. Samples: 256550. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:11:55,577][00196] Avg episode reward: [(0, '5.399')]
[2025-02-13 16:12:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1019904. Throughput: 0: 199.5. Samples: 256738. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:12:00,577][00196] Avg episode reward: [(0, '5.248')]
[2025-02-13 16:12:03,178][10139] Updated weights for policy 0, policy_version 250 (0.0578)
[2025-02-13 16:12:05,588][00196] Fps is (10 sec: 817.7, 60 sec: 818.9, 300 sec: 805.3). Total num frames: 1024000. Throughput: 0: 203.2. Samples: 257852. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:12:05,599][00196] Avg episode reward: [(0, '5.360')]
[2025-02-13 16:12:06,774][10122] Signal inference workers to stop experience collection... (250 times)
[2025-02-13 16:12:06,837][10139] InferenceWorker_p0-w0: stopping experience collection (250 times)
[2025-02-13 16:12:08,236][10122] Signal inference workers to resume experience collection... (250 times)
[2025-02-13 16:12:08,238][10139] InferenceWorker_p0-w0: resuming experience collection (250 times)
[2025-02-13 16:12:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1028096. Throughput: 0: 201.8. Samples: 259476. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 16:12:10,578][00196] Avg episode reward: [(0, '5.343')]
[2025-02-13 16:12:15,569][00196] Fps is (10 sec: 820.7, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1032192. Throughput: 0: 199.6. Samples: 259854. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:12:15,576][00196] Avg episode reward: [(0, '5.439')]
[2025-02-13 16:12:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1036288. Throughput: 0: 208.6. Samples: 261156. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:12:20,572][00196] Avg episode reward: [(0, '5.358')]
[2025-02-13 16:12:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1040384. Throughput: 0: 204.5. Samples: 262660. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:12:25,577][00196] Avg episode reward: [(0, '5.307')]
[2025-02-13 16:12:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1044480. Throughput: 0: 207.9. Samples: 263196. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:12:30,576][00196] Avg episode reward: [(0, '5.541')]
[2025-02-13 16:12:35,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1048576. Throughput: 0: 208.3. Samples: 264260. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:12:35,575][00196] Avg episode reward: [(0, '5.542')]
[2025-02-13 16:12:37,640][10122] Saving new best policy, reward=5.541!
[2025-02-13 16:12:40,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1052672. Throughput: 0: 204.4. Samples: 265748.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:12:40,579][00196] Avg episode reward: [(0, '5.693')] [2025-02-13 16:12:45,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1056768. Throughput: 0: 207.6. Samples: 266082. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:12:45,579][00196] Avg episode reward: [(0, '5.697')] [2025-02-13 16:12:46,395][10122] Saving new best policy, reward=5.693! [2025-02-13 16:12:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1060864. Throughput: 0: 216.2. Samples: 267576. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:12:50,575][00196] Avg episode reward: [(0, '6.013')] [2025-02-13 16:12:52,468][10122] Saving new best policy, reward=5.697! [2025-02-13 16:12:52,483][10139] Updated weights for policy 0, policy_version 260 (0.1130) [2025-02-13 16:12:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1064960. Throughput: 0: 208.2. Samples: 268846. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:12:55,582][00196] Avg episode reward: [(0, '6.112')] [2025-02-13 16:12:56,877][10122] Saving new best policy, reward=6.013! [2025-02-13 16:13:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1069056. Throughput: 0: 213.8. Samples: 269474. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:13:00,574][00196] Avg episode reward: [(0, '6.056')] [2025-02-13 16:13:01,628][10122] Saving new best policy, reward=6.112! [2025-02-13 16:13:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.4, 300 sec: 819.2). Total num frames: 1073152. Throughput: 0: 209.9. Samples: 270602. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:13:05,577][00196] Avg episode reward: [(0, '6.001')] [2025-02-13 16:13:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1077248. Throughput: 0: 201.4. Samples: 271722. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:13:10,578][00196] Avg episode reward: [(0, '5.876')] [2025-02-13 16:13:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1081344. Throughput: 0: 200.9. Samples: 272238. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:13:15,575][00196] Avg episode reward: [(0, '5.912')] [2025-02-13 16:13:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1085440. Throughput: 0: 203.9. Samples: 273434. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:13:20,576][00196] Avg episode reward: [(0, '5.827')] [2025-02-13 16:13:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1089536. Throughput: 0: 195.5. Samples: 274544. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:13:25,572][00196] Avg episode reward: [(0, '5.615')] [2025-02-13 16:13:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1093632. Throughput: 0: 200.9. Samples: 275122. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:13:30,575][00196] Avg episode reward: [(0, '5.643')] [2025-02-13 16:13:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1097728. Throughput: 0: 202.0. Samples: 276668. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:13:35,575][00196] Avg episode reward: [(0, '5.663')] [2025-02-13 16:13:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1101824. Throughput: 0: 193.8. Samples: 277566. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:13:40,572][00196] Avg episode reward: [(0, '5.682')] [2025-02-13 16:13:43,320][10139] Updated weights for policy 0, policy_version 270 (0.2275) [2025-02-13 16:13:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1105920. Throughput: 0: 189.6. Samples: 278008. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:13:45,572][00196] Avg episode reward: [(0, '5.555')] [2025-02-13 16:13:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1110016. Throughput: 0: 196.0. Samples: 279422. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:13:50,579][00196] Avg episode reward: [(0, '5.593')] [2025-02-13 16:13:54,059][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth... [2025-02-13 16:13:54,247][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000224_917504.pth [2025-02-13 16:13:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1114112. Throughput: 0: 195.3. Samples: 280512. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:13:55,577][00196] Avg episode reward: [(0, '5.659')] [2025-02-13 16:14:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1118208. Throughput: 0: 195.2. Samples: 281024. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:14:00,573][00196] Avg episode reward: [(0, '5.683')] [2025-02-13 16:14:05,578][00196] Fps is (10 sec: 818.5, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 1122304. Throughput: 0: 199.5. Samples: 282414. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:14:05,581][00196] Avg episode reward: [(0, '5.778')] [2025-02-13 16:14:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1126400. Throughput: 0: 197.2. Samples: 283416. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:14:10,580][00196] Avg episode reward: [(0, '5.863')] [2025-02-13 16:14:15,569][00196] Fps is (10 sec: 819.9, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1130496. Throughput: 0: 198.9. Samples: 284072. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:14:15,572][00196] Avg episode reward: [(0, '5.809')] [2025-02-13 16:14:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1134592. Throughput: 0: 202.8. Samples: 285794. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:14:20,573][00196] Avg episode reward: [(0, '6.115')] [2025-02-13 16:14:22,901][10122] Saving new best policy, reward=6.115! [2025-02-13 16:14:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1138688. Throughput: 0: 204.0. Samples: 286746. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:14:25,579][00196] Avg episode reward: [(0, '6.244')] [2025-02-13 16:14:28,190][10122] Saving new best policy, reward=6.244! [2025-02-13 16:14:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1142784. Throughput: 0: 206.2. Samples: 287288. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:14:30,573][00196] Avg episode reward: [(0, '6.264')] [2025-02-13 16:14:34,846][10122] Saving new best policy, reward=6.264! [2025-02-13 16:14:34,904][10139] Updated weights for policy 0, policy_version 280 (0.1775) [2025-02-13 16:14:35,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 1146880. Throughput: 0: 195.6. Samples: 288226. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:14:35,578][00196] Avg episode reward: [(0, '6.284')] [2025-02-13 16:14:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1146880. Throughput: 0: 185.0. Samples: 288836. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:14:40,576][00196] Avg episode reward: [(0, '6.245')] [2025-02-13 16:14:44,874][10122] Saving new best policy, reward=6.284! [2025-02-13 16:14:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1150976. Throughput: 0: 184.5. Samples: 289328. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:14:45,572][00196] Avg episode reward: [(0, '6.097')] [2025-02-13 16:14:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1155072. Throughput: 0: 176.3. Samples: 290346. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:14:50,573][00196] Avg episode reward: [(0, '6.059')] [2025-02-13 16:14:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1159168. Throughput: 0: 184.5. Samples: 291720. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:14:55,577][00196] Avg episode reward: [(0, '5.964')] [2025-02-13 16:15:00,575][00196] Fps is (10 sec: 818.7, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1163264. Throughput: 0: 183.0. Samples: 292308. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:15:00,584][00196] Avg episode reward: [(0, '6.239')] [2025-02-13 16:15:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 805.3). Total num frames: 1167360. Throughput: 0: 171.5. Samples: 293510. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:15:05,575][00196] Avg episode reward: [(0, '6.217')] [2025-02-13 16:15:10,569][00196] Fps is (10 sec: 819.7, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1171456. Throughput: 0: 181.4. Samples: 294910. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:15:10,578][00196] Avg episode reward: [(0, '6.429')] [2025-02-13 16:15:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1171456. Throughput: 0: 182.1. Samples: 295482. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:15:15,572][00196] Avg episode reward: [(0, '6.377')] [2025-02-13 16:15:15,735][10122] Saving new best policy, reward=6.429! [2025-02-13 16:15:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1179648. Throughput: 0: 186.0. Samples: 296598. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:15:20,577][00196] Avg episode reward: [(0, '6.627')] [2025-02-13 16:15:24,148][10122] Saving new best policy, reward=6.627! [2025-02-13 16:15:25,570][00196] Fps is (10 sec: 1228.8, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1183744. Throughput: 0: 205.1. Samples: 298064. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:15:25,578][00196] Avg episode reward: [(0, '6.382')] [2025-02-13 16:15:30,442][10139] Updated weights for policy 0, policy_version 290 (0.0060) [2025-02-13 16:15:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1187840. Throughput: 0: 206.8. Samples: 298634. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:15:30,578][00196] Avg episode reward: [(0, '6.377')] [2025-02-13 16:15:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1191936. Throughput: 0: 207.1. Samples: 299664. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:15:35,571][00196] Avg episode reward: [(0, '6.419')] [2025-02-13 16:15:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1196032. Throughput: 0: 207.2. Samples: 301044. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:15:40,574][00196] Avg episode reward: [(0, '6.563')] [2025-02-13 16:15:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1200128. Throughput: 0: 209.0. Samples: 301712. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:15:45,572][00196] Avg episode reward: [(0, '6.736')] [2025-02-13 16:15:49,307][10122] Saving new best policy, reward=6.736! [2025-02-13 16:15:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1204224. Throughput: 0: 206.4. Samples: 302798. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:15:50,572][00196] Avg episode reward: [(0, '6.637')] [2025-02-13 16:15:53,690][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000295_1208320.pth... [2025-02-13 16:15:53,819][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000248_1015808.pth [2025-02-13 16:15:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1208320. Throughput: 0: 211.2. Samples: 304414. Policy #0 lag: (min: 1.0, avg: 1.7, max: 3.0) [2025-02-13 16:15:55,572][00196] Avg episode reward: [(0, '6.456')] [2025-02-13 16:16:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 1212416. Throughput: 0: 207.0. Samples: 304796. Policy #0 lag: (min: 1.0, avg: 1.7, max: 3.0) [2025-02-13 16:16:00,576][00196] Avg episode reward: [(0, '6.534')] [2025-02-13 16:16:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1216512. Throughput: 0: 205.7. Samples: 305854. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:16:05,577][00196] Avg episode reward: [(0, '6.434')] [2025-02-13 16:16:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1220608. Throughput: 0: 209.4. Samples: 307488. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:16:10,572][00196] Avg episode reward: [(0, '6.409')] [2025-02-13 16:16:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 805.3). Total num frames: 1224704. Throughput: 0: 206.7. Samples: 307934. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:16:15,577][00196] Avg episode reward: [(0, '6.379')] [2025-02-13 16:16:19,243][10139] Updated weights for policy 0, policy_version 300 (0.0476) [2025-02-13 16:16:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1228800. Throughput: 0: 206.5. Samples: 308956. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:16:20,576][00196] Avg episode reward: [(0, '6.368')] [2025-02-13 16:16:21,808][10122] Signal inference workers to stop experience collection... (300 times) [2025-02-13 16:16:21,894][10139] InferenceWorker_p0-w0: stopping experience collection (300 times) [2025-02-13 16:16:23,607][10122] Signal inference workers to resume experience collection... (300 times) [2025-02-13 16:16:23,612][10139] InferenceWorker_p0-w0: resuming experience collection (300 times) [2025-02-13 16:16:25,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1232896. Throughput: 0: 208.8. Samples: 310440. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:16:25,577][00196] Avg episode reward: [(0, '6.112')] [2025-02-13 16:16:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1236992. Throughput: 0: 209.3. Samples: 311130. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:16:30,574][00196] Avg episode reward: [(0, '6.222')] [2025-02-13 16:16:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1241088. Throughput: 0: 203.8. Samples: 311968. Policy #0 lag: (min: 1.0, avg: 1.7, max: 2.0) [2025-02-13 16:16:35,572][00196] Avg episode reward: [(0, '6.360')] [2025-02-13 16:16:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1245184. Throughput: 0: 205.2. Samples: 313650. Policy #0 lag: (min: 1.0, avg: 1.7, max: 2.0) [2025-02-13 16:16:40,580][00196] Avg episode reward: [(0, '6.394')] [2025-02-13 16:16:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1249280. Throughput: 0: 207.5. Samples: 314132. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:16:45,578][00196] Avg episode reward: [(0, '6.478')] [2025-02-13 16:16:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1253376. Throughput: 0: 206.0. 
Samples: 315126. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:16:50,572][00196] Avg episode reward: [(0, '6.545')] [2025-02-13 16:16:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1257472. Throughput: 0: 207.9. Samples: 316844. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:16:55,574][00196] Avg episode reward: [(0, '6.680')] [2025-02-13 16:17:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.4). Total num frames: 1261568. Throughput: 0: 211.2. Samples: 317440. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:17:00,572][00196] Avg episode reward: [(0, '6.984')] [2025-02-13 16:17:02,805][10122] Saving new best policy, reward=6.984! [2025-02-13 16:17:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1265664. Throughput: 0: 209.5. Samples: 318384. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:17:05,571][00196] Avg episode reward: [(0, '7.278')] [2025-02-13 16:17:07,787][10122] Saving new best policy, reward=7.278! [2025-02-13 16:17:07,794][10139] Updated weights for policy 0, policy_version 310 (0.1012) [2025-02-13 16:17:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1269760. Throughput: 0: 212.0. Samples: 319982. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:17:10,579][00196] Avg episode reward: [(0, '7.576')] [2025-02-13 16:17:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1273856. Throughput: 0: 206.1. Samples: 320404. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:17:15,578][00196] Avg episode reward: [(0, '7.471')] [2025-02-13 16:17:18,096][10122] Saving new best policy, reward=7.576! [2025-02-13 16:17:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1277952. Throughput: 0: 210.2. Samples: 321426. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:17:20,573][00196] Avg episode reward: [(0, '7.376')] [2025-02-13 16:17:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1282048. Throughput: 0: 202.9. Samples: 322782. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:17:25,572][00196] Avg episode reward: [(0, '7.386')] [2025-02-13 16:17:30,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1286144. Throughput: 0: 202.8. Samples: 323260. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:17:30,581][00196] Avg episode reward: [(0, '7.394')] [2025-02-13 16:17:35,575][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 1290240. Throughput: 0: 203.1. Samples: 324266. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:17:35,581][00196] Avg episode reward: [(0, '7.467')] [2025-02-13 16:17:40,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1294336. Throughput: 0: 196.8. Samples: 325700. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:17:40,578][00196] Avg episode reward: [(0, '7.397')] [2025-02-13 16:17:45,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1298432. Throughput: 0: 195.3. Samples: 326228. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:17:45,576][00196] Avg episode reward: [(0, '7.261')] [2025-02-13 16:17:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1302528. Throughput: 0: 197.1. Samples: 327254. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:17:50,573][00196] Avg episode reward: [(0, '7.303')] [2025-02-13 16:17:54,630][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000319_1306624.pth... 
[2025-02-13 16:17:54,748][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth [2025-02-13 16:17:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1306624. Throughput: 0: 190.6. Samples: 328558. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:17:55,576][00196] Avg episode reward: [(0, '7.543')] [2025-02-13 16:17:58,930][10139] Updated weights for policy 0, policy_version 320 (0.0672) [2025-02-13 16:18:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1310720. Throughput: 0: 197.0. Samples: 329268. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:18:00,575][00196] Avg episode reward: [(0, '7.365')] [2025-02-13 16:18:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1314816. Throughput: 0: 198.0. Samples: 330336. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:18:05,574][00196] Avg episode reward: [(0, '7.323')] [2025-02-13 16:18:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1318912. Throughput: 0: 194.8. Samples: 331550. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:18:10,572][00196] Avg episode reward: [(0, '6.944')] [2025-02-13 16:18:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1323008. Throughput: 0: 201.8. Samples: 332340. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:18:15,572][00196] Avg episode reward: [(0, '6.838')] [2025-02-13 16:18:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1327104. Throughput: 0: 202.7. Samples: 333388. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:18:20,572][00196] Avg episode reward: [(0, '7.004')] [2025-02-13 16:18:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1331200. Throughput: 0: 199.0. Samples: 334656. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:18:25,573][00196] Avg episode reward: [(0, '7.034')] [2025-02-13 16:18:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1335296. Throughput: 0: 203.3. Samples: 335376. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:18:30,572][00196] Avg episode reward: [(0, '7.467')] [2025-02-13 16:18:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 1339392. Throughput: 0: 202.9. Samples: 336384. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:18:35,574][00196] Avg episode reward: [(0, '7.651')] [2025-02-13 16:18:40,377][10122] Saving new best policy, reward=7.651! [2025-02-13 16:18:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1343488. Throughput: 0: 197.0. Samples: 337422. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:18:40,572][00196] Avg episode reward: [(0, '7.664')] [2025-02-13 16:18:45,320][10122] Saving new best policy, reward=7.664! [2025-02-13 16:18:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1347584. Throughput: 0: 204.1. Samples: 338454. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:18:45,574][00196] Avg episode reward: [(0, '7.570')] [2025-02-13 16:18:50,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1347584. Throughput: 0: 204.1. Samples: 339520. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:18:50,573][00196] Avg episode reward: [(0, '7.625')] [2025-02-13 16:18:51,938][10139] Updated weights for policy 0, policy_version 330 (0.1459) [2025-02-13 16:18:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1351680. Throughput: 0: 200.0. Samples: 340552. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:18:55,576][00196] Avg episode reward: [(0, '7.756')] [2025-02-13 16:19:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.5). Total num frames: 1355776. Throughput: 0: 194.8. Samples: 341108. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:19:00,578][00196] Avg episode reward: [(0, '7.641')] [2025-02-13 16:19:01,705][10122] Saving new best policy, reward=7.756! [2025-02-13 16:19:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1359872. Throughput: 0: 195.5. Samples: 342186. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:19:05,578][00196] Avg episode reward: [(0, '7.682')] [2025-02-13 16:19:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1363968. Throughput: 0: 190.6. Samples: 343234. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:19:10,572][00196] Avg episode reward: [(0, '7.624')] [2025-02-13 16:19:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1368064. Throughput: 0: 185.5. Samples: 343724. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:19:15,583][00196] Avg episode reward: [(0, '7.665')] [2025-02-13 16:19:20,570][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1372160. Throughput: 0: 194.4. Samples: 345132. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:19:20,576][00196] Avg episode reward: [(0, '7.732')] [2025-02-13 16:19:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1376256. Throughput: 0: 190.7. Samples: 346004. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:19:25,573][00196] Avg episode reward: [(0, '7.873')] [2025-02-13 16:19:28,697][10122] Saving new best policy, reward=7.873! [2025-02-13 16:19:30,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1380352. 
Throughput: 0: 184.1. Samples: 346738. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:19:30,573][00196] Avg episode reward: [(0, '8.098')] [2025-02-13 16:19:33,668][10122] Saving new best policy, reward=8.098! [2025-02-13 16:19:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1384448. Throughput: 0: 186.8. Samples: 347924. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:19:35,578][00196] Avg episode reward: [(0, '8.199')] [2025-02-13 16:19:40,462][10122] Saving new best policy, reward=8.199! [2025-02-13 16:19:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1388544. Throughput: 0: 182.4. Samples: 348760. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:19:40,578][00196] Avg episode reward: [(0, '8.321')] [2025-02-13 16:19:45,395][10122] Saving new best policy, reward=8.321! [2025-02-13 16:19:45,402][10139] Updated weights for policy 0, policy_version 340 (0.0560) [2025-02-13 16:19:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1392640. Throughput: 0: 192.5. Samples: 349770. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:19:45,572][00196] Avg episode reward: [(0, '8.252')] [2025-02-13 16:19:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1396736. Throughput: 0: 191.6. Samples: 350808. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:19:50,573][00196] Avg episode reward: [(0, '8.085')] [2025-02-13 16:19:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1396736. Throughput: 0: 191.3. Samples: 351842. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:19:55,580][00196] Avg episode reward: [(0, '8.149')] [2025-02-13 16:19:56,749][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000342_1400832.pth... 
[2025-02-13 16:19:56,866][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000295_1208320.pth [2025-02-13 16:20:00,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1400832. Throughput: 0: 194.5. Samples: 352476. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:20:00,576][00196] Avg episode reward: [(0, '8.382')] [2025-02-13 16:20:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1404928. Throughput: 0: 193.1. Samples: 353822. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:20:05,581][00196] Avg episode reward: [(0, '8.073')] [2025-02-13 16:20:06,552][10122] Saving new best policy, reward=8.382! [2025-02-13 16:20:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1409024. Throughput: 0: 193.9. Samples: 354728. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:20:10,574][00196] Avg episode reward: [(0, '8.099')] [2025-02-13 16:20:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1413120. Throughput: 0: 188.1. Samples: 355202. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:20:15,578][00196] Avg episode reward: [(0, '7.985')] [2025-02-13 16:20:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1417216. Throughput: 0: 198.2. Samples: 356842. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:20:20,580][00196] Avg episode reward: [(0, '7.998')] [2025-02-13 16:20:25,571][00196] Fps is (10 sec: 819.0, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1421312. Throughput: 0: 201.4. Samples: 357824. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:20:25,575][00196] Avg episode reward: [(0, '8.233')] [2025-02-13 16:20:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1425408. Throughput: 0: 193.7. Samples: 358486. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:20:30,579][00196] Avg episode reward: [(0, '8.310')] [2025-02-13 16:20:35,443][10139] Updated weights for policy 0, policy_version 350 (0.0070) [2025-02-13 16:20:35,569][00196] Fps is (10 sec: 1229.0, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1433600. Throughput: 0: 202.0. Samples: 359898. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:20:35,573][00196] Avg episode reward: [(0, '8.254')] [2025-02-13 16:20:39,098][10122] Signal inference workers to stop experience collection... (350 times) [2025-02-13 16:20:39,219][10139] InferenceWorker_p0-w0: stopping experience collection (350 times) [2025-02-13 16:20:40,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1433600. Throughput: 0: 202.4. Samples: 360948. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:20:40,575][00196] Avg episode reward: [(0, '8.045')] [2025-02-13 16:20:41,409][10122] Signal inference workers to resume experience collection... (350 times) [2025-02-13 16:20:41,410][10139] InferenceWorker_p0-w0: resuming experience collection (350 times) [2025-02-13 16:20:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1437696. Throughput: 0: 199.5. Samples: 361452. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:20:45,575][00196] Avg episode reward: [(0, '7.773')] [2025-02-13 16:20:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1441792. Throughput: 0: 204.8. Samples: 363038. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:20:50,573][00196] Avg episode reward: [(0, '7.844')] [2025-02-13 16:20:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1445888. Throughput: 0: 206.8. Samples: 364034. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:20:55,576][00196] Avg episode reward: [(0, '7.938')] [2025-02-13 16:21:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1449984. Throughput: 0: 208.8. Samples: 364598. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:21:00,575][00196] Avg episode reward: [(0, '7.972')] [2025-02-13 16:21:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1454080. Throughput: 0: 202.1. Samples: 365938. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:21:05,577][00196] Avg episode reward: [(0, '8.215')] [2025-02-13 16:21:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1458176. Throughput: 0: 208.3. Samples: 367198. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:21:10,573][00196] Avg episode reward: [(0, '8.149')] [2025-02-13 16:21:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1462272. Throughput: 0: 201.8. Samples: 367568. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:21:15,582][00196] Avg episode reward: [(0, '8.273')] [2025-02-13 16:21:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1466368. Throughput: 0: 204.3. Samples: 369090. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:21:20,580][00196] Avg episode reward: [(0, '8.456')] [2025-02-13 16:21:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1470464. Throughput: 0: 205.3. Samples: 370188. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:21:25,574][00196] Avg episode reward: [(0, '8.296')] [2025-02-13 16:21:27,768][10122] Saving new best policy, reward=8.456! [2025-02-13 16:21:27,766][10139] Updated weights for policy 0, policy_version 360 (0.0757) [2025-02-13 16:21:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1474560. 
Throughput: 0: 198.9. Samples: 370404. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:21:30,572][00196] Avg episode reward: [(0, '8.443')] [2025-02-13 16:21:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1478656. Throughput: 0: 195.2. Samples: 371822. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:21:35,576][00196] Avg episode reward: [(0, '8.391')] [2025-02-13 16:21:40,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 791.4). Total num frames: 1482752. Throughput: 0: 199.4. Samples: 373006. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:21:40,578][00196] Avg episode reward: [(0, '8.229')] [2025-02-13 16:21:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1486848. Throughput: 0: 195.8. Samples: 373410. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:21:45,573][00196] Avg episode reward: [(0, '8.170')] [2025-02-13 16:21:50,570][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1490944. Throughput: 0: 202.7. Samples: 375058. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:21:50,581][00196] Avg episode reward: [(0, '8.091')] [2025-02-13 16:21:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1495040. Throughput: 0: 202.8. Samples: 376322. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:21:55,576][00196] Avg episode reward: [(0, '8.427')] [2025-02-13 16:21:57,817][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000366_1499136.pth... [2025-02-13 16:21:57,967][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000319_1306624.pth [2025-02-13 16:22:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1499136. Throughput: 0: 197.7. Samples: 376464. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:22:00,572][00196] Avg episode reward: [(0, '8.560')] [2025-02-13 16:22:02,830][10122] Saving new best policy, reward=8.560! [2025-02-13 16:22:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1503232. Throughput: 0: 197.3. Samples: 377968. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:22:05,579][00196] Avg episode reward: [(0, '8.579')] [2025-02-13 16:22:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1507328. Throughput: 0: 202.1. Samples: 379282. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:22:10,572][00196] Avg episode reward: [(0, '8.617')] [2025-02-13 16:22:12,923][10122] Saving new best policy, reward=8.579! [2025-02-13 16:22:13,103][10122] Saving new best policy, reward=8.617! [2025-02-13 16:22:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1511424. Throughput: 0: 203.4. Samples: 379556. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:22:15,571][00196] Avg episode reward: [(0, '8.757')] [2025-02-13 16:22:17,567][10122] Saving new best policy, reward=8.757! [2025-02-13 16:22:17,573][10139] Updated weights for policy 0, policy_version 370 (0.1081) [2025-02-13 16:22:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1515520. Throughput: 0: 205.1. Samples: 381052. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:22:20,579][00196] Avg episode reward: [(0, '8.750')] [2025-02-13 16:22:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1519616. Throughput: 0: 208.4. Samples: 382382. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:22:25,578][00196] Avg episode reward: [(0, '8.631')] [2025-02-13 16:22:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1523712. Throughput: 0: 202.8. Samples: 382534. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:22:30,573][00196] Avg episode reward: [(0, '8.581')] [2025-02-13 16:22:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1527808. Throughput: 0: 198.6. Samples: 383994. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:22:35,579][00196] Avg episode reward: [(0, '8.674')] [2025-02-13 16:22:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 791.4). Total num frames: 1531904. Throughput: 0: 203.2. Samples: 385466. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:22:40,573][00196] Avg episode reward: [(0, '8.699')] [2025-02-13 16:22:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1536000. Throughput: 0: 205.0. Samples: 385688. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:22:45,578][00196] Avg episode reward: [(0, '8.974')] [2025-02-13 16:22:47,106][10122] Saving new best policy, reward=8.974! [2025-02-13 16:22:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1540096. Throughput: 0: 207.5. Samples: 387304. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:22:50,573][00196] Avg episode reward: [(0, '8.966')] [2025-02-13 16:22:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1544192. Throughput: 0: 204.6. Samples: 388490. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:22:55,573][00196] Avg episode reward: [(0, '8.782')] [2025-02-13 16:23:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1548288. Throughput: 0: 208.8. Samples: 388952. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:23:00,571][00196] Avg episode reward: [(0, '8.924')] [2025-02-13 16:23:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1552384. Throughput: 0: 205.3. Samples: 390292. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:23:05,583][00196] Avg episode reward: [(0, '8.647')] [2025-02-13 16:23:06,309][10139] Updated weights for policy 0, policy_version 380 (0.0476) [2025-02-13 16:23:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1556480. Throughput: 0: 205.4. Samples: 391624. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:23:10,573][00196] Avg episode reward: [(0, '8.432')] [2025-02-13 16:23:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1560576. Throughput: 0: 213.1. Samples: 392122. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:23:15,578][00196] Avg episode reward: [(0, '8.549')] [2025-02-13 16:23:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1564672. Throughput: 0: 210.3. Samples: 393456. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:23:20,574][00196] Avg episode reward: [(0, '8.800')] [2025-02-13 16:23:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1568768. Throughput: 0: 206.4. Samples: 394756. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:23:25,579][00196] Avg episode reward: [(0, '8.732')] [2025-02-13 16:23:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1572864. Throughput: 0: 208.7. Samples: 395080. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:23:30,572][00196] Avg episode reward: [(0, '8.667')] [2025-02-13 16:23:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1576960. Throughput: 0: 201.6. Samples: 396378. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:23:35,580][00196] Avg episode reward: [(0, '8.876')] [2025-02-13 16:23:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1581056. Throughput: 0: 208.2. Samples: 397858. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:23:40,581][00196] Avg episode reward: [(0, '8.974')] [2025-02-13 16:23:45,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1585152. Throughput: 0: 205.3. Samples: 398190. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:23:45,576][00196] Avg episode reward: [(0, '9.200')] [2025-02-13 16:23:47,941][10122] Saving new best policy, reward=9.200! [2025-02-13 16:23:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1589248. Throughput: 0: 202.7. Samples: 399412. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:23:50,576][00196] Avg episode reward: [(0, '9.079')] [2025-02-13 16:23:52,510][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000389_1593344.pth... [2025-02-13 16:23:52,626][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000342_1400832.pth [2025-02-13 16:23:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1593344. Throughput: 0: 207.3. Samples: 400952. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:23:55,572][00196] Avg episode reward: [(0, '9.079')] [2025-02-13 16:23:56,803][10139] Updated weights for policy 0, policy_version 390 (0.1673) [2025-02-13 16:24:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1597440. Throughput: 0: 209.6. Samples: 401556. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:24:00,573][00196] Avg episode reward: [(0, '9.062')] [2025-02-13 16:24:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1601536. Throughput: 0: 195.7. Samples: 402262. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:05,576][00196] Avg episode reward: [(0, '9.044')] [2025-02-13 16:24:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1605632. 
Throughput: 0: 202.0. Samples: 403844. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:10,573][00196] Avg episode reward: [(0, '8.906')] [2025-02-13 16:24:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1609728. Throughput: 0: 205.7. Samples: 404338. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:15,580][00196] Avg episode reward: [(0, '9.128')] [2025-02-13 16:24:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1613824. Throughput: 0: 199.4. Samples: 405352. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:20,572][00196] Avg episode reward: [(0, '9.246')] [2025-02-13 16:24:22,726][10122] Saving new best policy, reward=9.246! [2025-02-13 16:24:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1617920. Throughput: 0: 203.9. Samples: 407034. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:25,572][00196] Avg episode reward: [(0, '9.350')] [2025-02-13 16:24:27,365][10122] Saving new best policy, reward=9.350! [2025-02-13 16:24:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1622016. Throughput: 0: 206.4. Samples: 407478. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:30,577][00196] Avg episode reward: [(0, '8.939')] [2025-02-13 16:24:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1626112. Throughput: 0: 197.1. Samples: 408282. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:35,574][00196] Avg episode reward: [(0, '9.001')] [2025-02-13 16:24:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 1630208. Throughput: 0: 195.0. Samples: 409726. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:40,573][00196] Avg episode reward: [(0, '9.148')] [2025-02-13 16:24:45,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). 
Total num frames: 1630208. Throughput: 0: 193.5. Samples: 410264. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:45,580][00196] Avg episode reward: [(0, '9.249')] [2025-02-13 16:24:50,572][00196] Fps is (10 sec: 409.5, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 1634304. Throughput: 0: 179.3. Samples: 410330. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:50,586][00196] Avg episode reward: [(0, '9.249')] [2025-02-13 16:24:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1634304. Throughput: 0: 164.9. Samples: 411264. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:24:55,575][00196] Avg episode reward: [(0, '9.406')] [2025-02-13 16:24:56,057][10139] Updated weights for policy 0, policy_version 400 (0.3822) [2025-02-13 16:24:59,037][10122] Signal inference workers to stop experience collection... (400 times) [2025-02-13 16:24:59,093][10139] InferenceWorker_p0-w0: stopping experience collection (400 times) [2025-02-13 16:25:00,569][00196] Fps is (10 sec: 409.7, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1638400. Throughput: 0: 171.2. Samples: 412042. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:25:00,572][00196] Avg episode reward: [(0, '9.406')] [2025-02-13 16:25:00,841][10122] Signal inference workers to resume experience collection... (400 times) [2025-02-13 16:25:00,843][10139] InferenceWorker_p0-w0: resuming experience collection (400 times) [2025-02-13 16:25:00,842][10122] Saving new best policy, reward=9.406! [2025-02-13 16:25:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1642496. Throughput: 0: 177.5. Samples: 413338. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:25:05,572][00196] Avg episode reward: [(0, '9.596')] [2025-02-13 16:25:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1646592. Throughput: 0: 159.1. Samples: 414192. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:25:10,583][00196] Avg episode reward: [(0, '9.404')] [2025-02-13 16:25:12,328][10122] Saving new best policy, reward=9.596! [2025-02-13 16:25:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1650688. Throughput: 0: 160.8. Samples: 414714. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:25:15,573][00196] Avg episode reward: [(0, '9.374')] [2025-02-13 16:25:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1654784. Throughput: 0: 178.0. Samples: 416294. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:25:20,580][00196] Avg episode reward: [(0, '9.272')] [2025-02-13 16:25:25,570][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1658880. Throughput: 0: 164.7. Samples: 417136. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:25:25,582][00196] Avg episode reward: [(0, '9.638')] [2025-02-13 16:25:27,944][10122] Saving new best policy, reward=9.638! [2025-02-13 16:25:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 777.5). Total num frames: 1662976. Throughput: 0: 163.3. Samples: 417612. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:25:30,572][00196] Avg episode reward: [(0, '9.933')] [2025-02-13 16:25:32,748][10122] Saving new best policy, reward=9.933! [2025-02-13 16:25:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 1667072. Throughput: 0: 193.5. Samples: 419036. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:25:35,577][00196] Avg episode reward: [(0, '10.233')] [2025-02-13 16:25:40,573][00196] Fps is (10 sec: 818.9, 60 sec: 682.6, 300 sec: 791.4). Total num frames: 1671168. Throughput: 0: 192.4. Samples: 419922. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:25:40,578][00196] Avg episode reward: [(0, '10.112')] [2025-02-13 16:25:43,725][10122] Saving new best policy, reward=10.233! [2025-02-13 16:25:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 1675264. Throughput: 0: 187.1. Samples: 420460. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:25:45,573][00196] Avg episode reward: [(0, '10.444')] [2025-02-13 16:25:48,378][10122] Saving new best policy, reward=10.444! [2025-02-13 16:25:48,391][10139] Updated weights for policy 0, policy_version 410 (0.0677) [2025-02-13 16:25:50,569][00196] Fps is (10 sec: 819.5, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 1679360. Throughput: 0: 188.2. Samples: 421808. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:25:50,579][00196] Avg episode reward: [(0, '10.615')] [2025-02-13 16:25:53,469][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000411_1683456.pth... [2025-02-13 16:25:53,558][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000366_1499136.pth [2025-02-13 16:25:53,587][10122] Saving new best policy, reward=10.615! [2025-02-13 16:25:55,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1683456. Throughput: 0: 191.5. Samples: 422808. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:25:55,575][00196] Avg episode reward: [(0, '10.600')] [2025-02-13 16:26:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1687552. Throughput: 0: 194.2. Samples: 423452. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:26:00,576][00196] Avg episode reward: [(0, '10.643')] [2025-02-13 16:26:04,580][10122] Saving new best policy, reward=10.643! [2025-02-13 16:26:05,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 1691648. Throughput: 0: 180.9. Samples: 424436. 
Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:26:05,580][00196] Avg episode reward: [(0, '10.689')] [2025-02-13 16:26:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1691648. Throughput: 0: 184.6. Samples: 425442. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 16:26:10,575][00196] Avg episode reward: [(0, '10.505')] [2025-02-13 16:26:10,629][10122] Saving new best policy, reward=10.689! [2025-02-13 16:26:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1695744. Throughput: 0: 187.0. Samples: 426028. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:26:15,575][00196] Avg episode reward: [(0, '10.819')] [2025-02-13 16:26:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1699840. Throughput: 0: 187.8. Samples: 427486. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:26:20,575][00196] Avg episode reward: [(0, '10.493')] [2025-02-13 16:26:20,810][10122] Saving new best policy, reward=10.819! [2025-02-13 16:26:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1703936. Throughput: 0: 190.4. Samples: 428488. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:26:25,576][00196] Avg episode reward: [(0, '10.318')] [2025-02-13 16:26:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1708032. Throughput: 0: 187.6. Samples: 428900. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:26:30,577][00196] Avg episode reward: [(0, '10.636')] [2025-02-13 16:26:35,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.6). Total num frames: 1712128. Throughput: 0: 189.2. Samples: 430322. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:26:35,573][00196] Avg episode reward: [(0, '10.862')] [2025-02-13 16:26:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.5). Total num frames: 1716224. 
Throughput: 0: 192.0. Samples: 431448. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:26:40,579][00196] Avg episode reward: [(0, '10.831')] [2025-02-13 16:26:43,542][10122] Saving new best policy, reward=10.862! [2025-02-13 16:26:43,557][10139] Updated weights for policy 0, policy_version 420 (0.0078) [2025-02-13 16:26:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1720320. Throughput: 0: 179.4. Samples: 431526. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:26:45,576][00196] Avg episode reward: [(0, '10.670')] [2025-02-13 16:26:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1724416. Throughput: 0: 187.5. Samples: 432874. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:26:50,577][00196] Avg episode reward: [(0, '10.405')] [2025-02-13 16:26:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.5). Total num frames: 1728512. Throughput: 0: 191.0. Samples: 434036. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:26:55,575][00196] Avg episode reward: [(0, '10.301')] [2025-02-13 16:27:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1732608. Throughput: 0: 188.6. Samples: 434516. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:27:00,577][00196] Avg episode reward: [(0, '10.183')] [2025-02-13 16:27:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 1736704. Throughput: 0: 179.1. Samples: 435544. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:27:05,576][00196] Avg episode reward: [(0, '10.315')] [2025-02-13 16:27:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 1740800. Throughput: 0: 185.0. Samples: 436812. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:10,575][00196] Avg episode reward: [(0, '9.857')] [2025-02-13 16:27:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1740800. Throughput: 0: 188.1. Samples: 437364. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:15,578][00196] Avg episode reward: [(0, '9.867')] [2025-02-13 16:27:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1744896. Throughput: 0: 183.6. Samples: 438584. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:20,572][00196] Avg episode reward: [(0, '9.819')] [2025-02-13 16:27:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1748992. Throughput: 0: 182.3. Samples: 439652. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:25,579][00196] Avg episode reward: [(0, '9.966')] [2025-02-13 16:27:30,570][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1753088. Throughput: 0: 188.8. Samples: 440022. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:30,576][00196] Avg episode reward: [(0, '10.284')] [2025-02-13 16:27:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1757184. Throughput: 0: 181.6. Samples: 441044. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:35,573][00196] Avg episode reward: [(0, '10.229')] [2025-02-13 16:27:38,366][10139] Updated weights for policy 0, policy_version 430 (0.1106) [2025-02-13 16:27:40,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1761280. Throughput: 0: 186.0. Samples: 442408. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:40,572][00196] Avg episode reward: [(0, '9.896')] [2025-02-13 16:27:45,573][00196] Fps is (10 sec: 818.9, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1765376. Throughput: 0: 182.4. Samples: 442726. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:45,578][00196] Avg episode reward: [(0, '10.189')] [2025-02-13 16:27:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1769472. Throughput: 0: 182.1. Samples: 443740. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:50,576][00196] Avg episode reward: [(0, '10.178')] [2025-02-13 16:27:54,120][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000433_1773568.pth... [2025-02-13 16:27:54,229][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000389_1593344.pth [2025-02-13 16:27:55,569][00196] Fps is (10 sec: 819.5, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1773568. Throughput: 0: 186.2. Samples: 445190. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:27:55,573][00196] Avg episode reward: [(0, '10.427')] [2025-02-13 16:28:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1777664. Throughput: 0: 186.0. Samples: 445734. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:00,575][00196] Avg episode reward: [(0, '10.505')] [2025-02-13 16:28:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 1777664. Throughput: 0: 181.3. Samples: 446744. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:05,572][00196] Avg episode reward: [(0, '10.468')] [2025-02-13 16:28:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1785856. Throughput: 0: 181.6. Samples: 447822. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:10,574][00196] Avg episode reward: [(0, '10.677')] [2025-02-13 16:28:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1785856. Throughput: 0: 194.8. Samples: 448788. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:15,575][00196] Avg episode reward: [(0, '11.396')] [2025-02-13 16:28:20,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1789952. Throughput: 0: 189.4. Samples: 449568. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:20,584][00196] Avg episode reward: [(0, '11.521')] [2025-02-13 16:28:22,147][10122] Saving new best policy, reward=11.396! [2025-02-13 16:28:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1794048. Throughput: 0: 187.9. Samples: 450862. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:25,574][00196] Avg episode reward: [(0, '11.464')] [2025-02-13 16:28:26,880][10122] Saving new best policy, reward=11.521! [2025-02-13 16:28:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1798144. Throughput: 0: 191.8. Samples: 451356. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:30,573][00196] Avg episode reward: [(0, '11.624')] [2025-02-13 16:28:32,519][10139] Updated weights for policy 0, policy_version 440 (0.0731) [2025-02-13 16:28:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1802240. Throughput: 0: 189.1. Samples: 452248. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:35,572][00196] Avg episode reward: [(0, '11.624')] [2025-02-13 16:28:38,489][10122] Saving new best policy, reward=11.624! [2025-02-13 16:28:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1806336. Throughput: 0: 185.2. Samples: 453522. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:40,576][00196] Avg episode reward: [(0, '11.801')] [2025-02-13 16:28:43,253][10122] Saving new best policy, reward=11.801! [2025-02-13 16:28:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 749.8). Total num frames: 1810432. Throughput: 0: 183.2. Samples: 453978. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:45,571][00196] Avg episode reward: [(0, '11.603')] [2025-02-13 16:28:50,572][00196] Fps is (10 sec: 819.0, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1814528. Throughput: 0: 183.8. Samples: 455016. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:50,575][00196] Avg episode reward: [(0, '11.762')] [2025-02-13 16:28:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1818624. Throughput: 0: 192.2. Samples: 456472. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:28:55,574][00196] Avg episode reward: [(0, '12.434')] [2025-02-13 16:28:58,532][10122] Saving new best policy, reward=12.434! [2025-02-13 16:29:00,569][00196] Fps is (10 sec: 819.4, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1822720. Throughput: 0: 182.6. Samples: 457004. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:00,574][00196] Avg episode reward: [(0, '12.318')] [2025-02-13 16:29:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 749.8). Total num frames: 1826816. Throughput: 0: 190.0. Samples: 458116. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:05,580][00196] Avg episode reward: [(0, '12.000')] [2025-02-13 16:29:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1830912. Throughput: 0: 186.5. Samples: 459254. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:10,580][00196] Avg episode reward: [(0, '12.286')] [2025-02-13 16:29:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 749.8). Total num frames: 1835008. Throughput: 0: 194.5. Samples: 460108. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:15,573][00196] Avg episode reward: [(0, '12.494')] [2025-02-13 16:29:20,373][10122] Saving new best policy, reward=12.494! [2025-02-13 16:29:20,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 749.8). Total num frames: 1839104. 
Throughput: 0: 197.9. Samples: 461154. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:20,572][00196] Avg episode reward: [(0, '12.574')] [2025-02-13 16:29:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 735.9). Total num frames: 1839104. Throughput: 0: 192.3. Samples: 462174. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:25,579][00196] Avg episode reward: [(0, '12.774')] [2025-02-13 16:29:25,911][10122] Saving new best policy, reward=12.574! [2025-02-13 16:29:25,919][10139] Updated weights for policy 0, policy_version 450 (0.0065) [2025-02-13 16:29:28,879][10122] Signal inference workers to stop experience collection... (450 times) [2025-02-13 16:29:28,926][10139] InferenceWorker_p0-w0: stopping experience collection (450 times) [2025-02-13 16:29:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 735.9). Total num frames: 1843200. Throughput: 0: 200.8. Samples: 463014. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:30,572][00196] Avg episode reward: [(0, '12.521')] [2025-02-13 16:29:30,709][10122] Signal inference workers to resume experience collection... (450 times) [2025-02-13 16:29:30,710][10139] InferenceWorker_p0-w0: resuming experience collection (450 times) [2025-02-13 16:29:30,715][10122] Saving new best policy, reward=12.774! [2025-02-13 16:29:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 735.9). Total num frames: 1847296. Throughput: 0: 205.3. Samples: 464254. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:35,572][00196] Avg episode reward: [(0, '13.339')] [2025-02-13 16:29:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1851392. Throughput: 0: 191.5. Samples: 465088. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:40,578][00196] Avg episode reward: [(0, '13.509')] [2025-02-13 16:29:42,683][10122] Saving new best policy, reward=13.339! 
[2025-02-13 16:29:42,828][10122] Saving new best policy, reward=13.509! [2025-02-13 16:29:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1855488. Throughput: 0: 188.8. Samples: 465500. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:45,572][00196] Avg episode reward: [(0, '12.837')] [2025-02-13 16:29:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 1859584. Throughput: 0: 191.0. Samples: 466710. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:29:50,577][00196] Avg episode reward: [(0, '12.869')] [2025-02-13 16:29:54,075][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000455_1863680.pth... [2025-02-13 16:29:54,205][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000411_1683456.pth [2025-02-13 16:29:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1863680. Throughput: 0: 186.4. Samples: 467642. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:29:55,580][00196] Avg episode reward: [(0, '12.558')] [2025-02-13 16:30:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1867776. Throughput: 0: 181.0. Samples: 468254. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:30:00,582][00196] Avg episode reward: [(0, '12.785')] [2025-02-13 16:30:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1871872. Throughput: 0: 182.2. Samples: 469354. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:30:05,576][00196] Avg episode reward: [(0, '13.102')] [2025-02-13 16:30:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 1871872. Throughput: 0: 180.6. Samples: 470302. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:30:10,578][00196] Avg episode reward: [(0, '13.196')] [2025-02-13 16:30:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 1875968. Throughput: 0: 180.3. Samples: 471126. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:30:15,577][00196] Avg episode reward: [(0, '13.195')] [2025-02-13 16:30:20,507][10139] Updated weights for policy 0, policy_version 460 (0.1272) [2025-02-13 16:30:20,569][00196] Fps is (10 sec: 1228.8, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1884160. Throughput: 0: 181.2. Samples: 472408. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:30:20,573][00196] Avg episode reward: [(0, '13.266')] [2025-02-13 16:30:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1884160. Throughput: 0: 185.6. Samples: 473438. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:30:25,573][00196] Avg episode reward: [(0, '13.092')] [2025-02-13 16:30:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1888256. Throughput: 0: 181.4. Samples: 473662. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:30:30,581][00196] Avg episode reward: [(0, '12.998')] [2025-02-13 16:30:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1892352. Throughput: 0: 193.1. Samples: 475400. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:30:35,574][00196] Avg episode reward: [(0, '13.388')] [2025-02-13 16:30:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1896448. Throughput: 0: 192.0. Samples: 476284. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:30:40,574][00196] Avg episode reward: [(0, '13.388')] [2025-02-13 16:30:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1900544. Throughput: 0: 184.5. Samples: 476558. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:30:45,572][00196] Avg episode reward: [(0, '13.275')] [2025-02-13 16:30:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1904640. Throughput: 0: 193.9. Samples: 478080. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:30:50,572][00196] Avg episode reward: [(0, '12.837')] [2025-02-13 16:30:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1908736. Throughput: 0: 198.8. Samples: 479250. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:30:55,576][00196] Avg episode reward: [(0, '12.978')] [2025-02-13 16:31:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1912832. Throughput: 0: 188.6. Samples: 479614. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:31:00,574][00196] Avg episode reward: [(0, '13.086')] [2025-02-13 16:31:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1916928. Throughput: 0: 186.7. Samples: 480808. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:31:05,573][00196] Avg episode reward: [(0, '13.311')] [2025-02-13 16:31:10,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 1921024. Throughput: 0: 189.8. Samples: 481978. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:31:10,581][00196] Avg episode reward: [(0, '12.425')] [2025-02-13 16:31:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1921024. Throughput: 0: 197.2. Samples: 482534. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:31:15,582][00196] Avg episode reward: [(0, '12.284')] [2025-02-13 16:31:15,775][10139] Updated weights for policy 0, policy_version 470 (0.1247) [2025-02-13 16:31:20,570][00196] Fps is (10 sec: 819.4, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1929216. Throughput: 0: 182.8. Samples: 483626. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:31:20,573][00196] Avg episode reward: [(0, '12.665')] [2025-02-13 16:31:25,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 1933312. Throughput: 0: 187.6. Samples: 484724. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:31:25,572][00196] Avg episode reward: [(0, '12.633')] [2025-02-13 16:31:30,572][00196] Fps is (10 sec: 409.5, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1933312. Throughput: 0: 195.3. Samples: 485348. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:31:30,579][00196] Avg episode reward: [(0, '12.369')] [2025-02-13 16:31:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1937408. Throughput: 0: 189.5. Samples: 486608. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:31:35,579][00196] Avg episode reward: [(0, '12.568')] [2025-02-13 16:31:40,569][00196] Fps is (10 sec: 819.5, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1941504. Throughput: 0: 190.0. Samples: 487802. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:31:40,580][00196] Avg episode reward: [(0, '12.447')] [2025-02-13 16:31:45,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1945600. Throughput: 0: 191.0. Samples: 488210. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:31:45,580][00196] Avg episode reward: [(0, '12.495')] [2025-02-13 16:31:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1949696. Throughput: 0: 187.7. Samples: 489254. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:31:50,573][00196] Avg episode reward: [(0, '12.923')] [2025-02-13 16:31:52,853][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000477_1953792.pth... 
[2025-02-13 16:31:52,976][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000433_1773568.pth [2025-02-13 16:31:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1953792. Throughput: 0: 197.4. Samples: 490860. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:31:55,574][00196] Avg episode reward: [(0, '13.185')] [2025-02-13 16:32:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1957888. Throughput: 0: 188.5. Samples: 491016. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:32:00,576][00196] Avg episode reward: [(0, '12.836')] [2025-02-13 16:32:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1961984. Throughput: 0: 187.2. Samples: 492048. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:32:05,572][00196] Avg episode reward: [(0, '12.733')] [2025-02-13 16:32:08,960][10139] Updated weights for policy 0, policy_version 480 (0.1754) [2025-02-13 16:32:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 1966080. Throughput: 0: 196.9. Samples: 493584. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:32:10,572][00196] Avg episode reward: [(0, '12.474')] [2025-02-13 16:32:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 1970176. Throughput: 0: 193.4. Samples: 494050. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:32:15,573][00196] Avg episode reward: [(0, '12.638')] [2025-02-13 16:32:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1974272. Throughput: 0: 188.1. Samples: 495074. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:32:20,582][00196] Avg episode reward: [(0, '12.571')] [2025-02-13 16:32:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 1978368. Throughput: 0: 172.0. Samples: 495544. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:32:25,579][00196] Avg episode reward: [(0, '12.410')] [2025-02-13 16:32:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 1982464. Throughput: 0: 197.2. Samples: 497086. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:32:30,572][00196] Avg episode reward: [(0, '12.654')] [2025-02-13 16:32:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1982464. Throughput: 0: 191.3. Samples: 497864. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:32:35,583][00196] Avg episode reward: [(0, '12.402')] [2025-02-13 16:32:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1986560. Throughput: 0: 185.2. Samples: 499196. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:32:40,576][00196] Avg episode reward: [(0, '12.571')] [2025-02-13 16:32:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1990656. Throughput: 0: 194.7. Samples: 499776. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:32:45,573][00196] Avg episode reward: [(0, '13.058')] [2025-02-13 16:32:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1994752. Throughput: 0: 188.9. Samples: 500550. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:32:50,574][00196] Avg episode reward: [(0, '13.259')] [2025-02-13 16:32:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 1998848. Throughput: 0: 186.2. Samples: 501964. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:32:55,573][00196] Avg episode reward: [(0, '12.845')] [2025-02-13 16:33:00,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2002944. Throughput: 0: 182.7. Samples: 502270. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:33:00,579][00196] Avg episode reward: [(0, '12.840')] [2025-02-13 16:33:04,293][10139] Updated weights for policy 0, policy_version 490 (0.2338) [2025-02-13 16:33:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2007040. Throughput: 0: 182.0. Samples: 503266. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:33:05,573][00196] Avg episode reward: [(0, '12.601')] [2025-02-13 16:33:10,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2011136. Throughput: 0: 198.9. Samples: 504494. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:33:10,582][00196] Avg episode reward: [(0, '12.504')] [2025-02-13 16:33:15,576][00196] Fps is (10 sec: 818.7, 60 sec: 750.9, 300 sec: 763.6). Total num frames: 2015232. Throughput: 0: 181.5. Samples: 505254. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:33:15,588][00196] Avg episode reward: [(0, '12.484')] [2025-02-13 16:33:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2019328. Throughput: 0: 187.2. Samples: 506288. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:33:20,574][00196] Avg episode reward: [(0, '12.139')] [2025-02-13 16:33:25,569][00196] Fps is (10 sec: 819.7, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2023424. Throughput: 0: 184.0. Samples: 507478. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:33:25,572][00196] Avg episode reward: [(0, '12.420')] [2025-02-13 16:33:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2027520. Throughput: 0: 190.8. Samples: 508364. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:33:30,579][00196] Avg episode reward: [(0, '12.428')] [2025-02-13 16:33:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2027520. Throughput: 0: 196.5. Samples: 509394. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:33:35,573][00196] Avg episode reward: [(0, '12.736')] [2025-02-13 16:33:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2035712. Throughput: 0: 188.3. Samples: 510438. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:33:40,572][00196] Avg episode reward: [(0, '13.029')] [2025-02-13 16:33:45,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2039808. Throughput: 0: 203.1. Samples: 511410. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:33:45,579][00196] Avg episode reward: [(0, '13.417')] [2025-02-13 16:33:50,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2039808. Throughput: 0: 199.0. Samples: 512220. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:33:50,573][00196] Avg episode reward: [(0, '13.485')] [2025-02-13 16:33:52,104][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000499_2043904.pth... [2025-02-13 16:33:52,216][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000455_1863680.pth [2025-02-13 16:33:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2043904. Throughput: 0: 200.2. Samples: 513502. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:33:55,581][00196] Avg episode reward: [(0, '13.698')] [2025-02-13 16:33:57,212][10139] Updated weights for policy 0, policy_version 500 (0.2294) [2025-02-13 16:34:00,016][10122] Signal inference workers to stop experience collection... (500 times) [2025-02-13 16:34:00,104][10139] InferenceWorker_p0-w0: stopping experience collection (500 times) [2025-02-13 16:34:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2048000. Throughput: 0: 192.6. Samples: 513922. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:34:00,574][00196] Avg episode reward: [(0, '13.814')] [2025-02-13 16:34:01,483][10122] Signal inference workers to resume experience collection... (500 times) [2025-02-13 16:34:01,483][10139] InferenceWorker_p0-w0: resuming experience collection (500 times) [2025-02-13 16:34:01,485][10122] Saving new best policy, reward=13.698! [2025-02-13 16:34:05,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2052096. Throughput: 0: 197.5. Samples: 515174. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:34:05,574][00196] Avg episode reward: [(0, '13.971')] [2025-02-13 16:34:08,025][10122] Saving new best policy, reward=13.814! [2025-02-13 16:34:08,151][10122] Saving new best policy, reward=13.971! [2025-02-13 16:34:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2056192. Throughput: 0: 197.7. Samples: 516374. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:34:10,573][00196] Avg episode reward: [(0, '14.091')] [2025-02-13 16:34:13,006][10122] Saving new best policy, reward=14.091! [2025-02-13 16:34:15,569][00196] Fps is (10 sec: 819.3, 60 sec: 751.0, 300 sec: 749.8). Total num frames: 2060288. Throughput: 0: 185.3. Samples: 516704. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:34:15,572][00196] Avg episode reward: [(0, '14.170')] [2025-02-13 16:34:18,078][10122] Saving new best policy, reward=14.170! [2025-02-13 16:34:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2064384. Throughput: 0: 187.7. Samples: 517842. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:34:20,572][00196] Avg episode reward: [(0, '14.147')] [2025-02-13 16:34:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2068480. Throughput: 0: 187.6. Samples: 518878. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:34:25,577][00196] Avg episode reward: [(0, '14.156')] [2025-02-13 16:34:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2072576. Throughput: 0: 181.2. Samples: 519566. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:34:30,572][00196] Avg episode reward: [(0, '14.202')] [2025-02-13 16:34:33,896][10122] Saving new best policy, reward=14.202! [2025-02-13 16:34:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2076672. Throughput: 0: 188.2. Samples: 520688. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:34:35,572][00196] Avg episode reward: [(0, '14.529')] [2025-02-13 16:34:40,516][10122] Saving new best policy, reward=14.529! [2025-02-13 16:34:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2080768. Throughput: 0: 178.1. Samples: 521518. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:34:40,582][00196] Avg episode reward: [(0, '14.824')] [2025-02-13 16:34:45,335][10122] Saving new best policy, reward=14.824! [2025-02-13 16:34:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2084864. Throughput: 0: 189.8. Samples: 522464. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:34:45,576][00196] Avg episode reward: [(0, '14.844')] [2025-02-13 16:34:50,491][10122] Saving new best policy, reward=14.844! [2025-02-13 16:34:50,502][10139] Updated weights for policy 0, policy_version 510 (0.0578) [2025-02-13 16:34:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2088960. Throughput: 0: 186.5. Samples: 523564. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:34:50,572][00196] Avg episode reward: [(0, '15.295')] [2025-02-13 16:34:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2088960. Throughput: 0: 181.2. Samples: 524530. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:34:55,579][00196] Avg episode reward: [(0, '15.205')] [2025-02-13 16:34:56,701][10122] Saving new best policy, reward=15.295! [2025-02-13 16:35:00,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2093056. Throughput: 0: 188.4. Samples: 525182. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:00,573][00196] Avg episode reward: [(0, '15.379')] [2025-02-13 16:35:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 2097152. Throughput: 0: 195.8. Samples: 526654. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:05,573][00196] Avg episode reward: [(0, '15.388')] [2025-02-13 16:35:06,575][10122] Saving new best policy, reward=15.379! [2025-02-13 16:35:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2101248. Throughput: 0: 189.2. Samples: 527392. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:10,573][00196] Avg episode reward: [(0, '15.267')] [2025-02-13 16:35:12,860][10122] Saving new best policy, reward=15.388! [2025-02-13 16:35:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2105344. Throughput: 0: 185.3. Samples: 527904. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:15,582][00196] Avg episode reward: [(0, '15.023')] [2025-02-13 16:35:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2109440. Throughput: 0: 194.8. Samples: 529456. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:20,576][00196] Avg episode reward: [(0, '14.728')] [2025-02-13 16:35:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2113536. Throughput: 0: 195.3. Samples: 530306. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:25,573][00196] Avg episode reward: [(0, '14.500')] [2025-02-13 16:35:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2117632. Throughput: 0: 184.2. Samples: 530754. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:30,573][00196] Avg episode reward: [(0, '14.542')] [2025-02-13 16:35:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2121728. Throughput: 0: 188.8. Samples: 532062. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:35,573][00196] Avg episode reward: [(0, '14.239')] [2025-02-13 16:35:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2125824. Throughput: 0: 190.0. Samples: 533078. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:40,576][00196] Avg episode reward: [(0, '14.225')] [2025-02-13 16:35:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2129920. Throughput: 0: 188.9. Samples: 533684. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:45,576][00196] Avg episode reward: [(0, '14.427')] [2025-02-13 16:35:45,797][10139] Updated weights for policy 0, policy_version 520 (0.0669) [2025-02-13 16:35:50,571][00196] Fps is (10 sec: 819.0, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2134016. Throughput: 0: 182.5. Samples: 534866. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:50,575][00196] Avg episode reward: [(0, '14.534')] [2025-02-13 16:35:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2134016. Throughput: 0: 189.2. Samples: 535906. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:35:55,573][00196] Avg episode reward: [(0, '14.257')] [2025-02-13 16:35:55,716][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000522_2138112.pth... 
[2025-02-13 16:35:55,895][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000477_1953792.pth [2025-02-13 16:36:00,569][00196] Fps is (10 sec: 409.7, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2138112. Throughput: 0: 189.8. Samples: 536446. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:00,573][00196] Avg episode reward: [(0, '14.152')] [2025-02-13 16:36:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2142208. Throughput: 0: 186.0. Samples: 537824. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:05,586][00196] Avg episode reward: [(0, '14.330')] [2025-02-13 16:36:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2146304. Throughput: 0: 191.7. Samples: 538934. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:10,579][00196] Avg episode reward: [(0, '14.480')] [2025-02-13 16:36:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2150400. Throughput: 0: 188.8. Samples: 539248. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:15,573][00196] Avg episode reward: [(0, '14.029')] [2025-02-13 16:36:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2154496. Throughput: 0: 190.3. Samples: 540624. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:20,583][00196] Avg episode reward: [(0, '14.321')] [2025-02-13 16:36:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2158592. Throughput: 0: 196.3. Samples: 541912. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:25,574][00196] Avg episode reward: [(0, '14.516')] [2025-02-13 16:36:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2162688. Throughput: 0: 188.6. Samples: 542172. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:30,573][00196] Avg episode reward: [(0, '14.598')] [2025-02-13 16:36:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2166784. Throughput: 0: 191.0. Samples: 543462. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:35,572][00196] Avg episode reward: [(0, '14.526')] [2025-02-13 16:36:38,211][10139] Updated weights for policy 0, policy_version 530 (0.1153) [2025-02-13 16:36:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2170880. Throughput: 0: 201.2. Samples: 544958. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:40,573][00196] Avg episode reward: [(0, '14.894')] [2025-02-13 16:36:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2174976. Throughput: 0: 192.7. Samples: 545118. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:45,575][00196] Avg episode reward: [(0, '14.727')] [2025-02-13 16:36:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 2179072. Throughput: 0: 184.1. Samples: 546108. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:50,576][00196] Avg episode reward: [(0, '14.791')] [2025-02-13 16:36:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2183168. Throughput: 0: 189.0. Samples: 547438. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:36:55,572][00196] Avg episode reward: [(0, '15.122')] [2025-02-13 16:37:00,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2183168. Throughput: 0: 196.7. Samples: 548100. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:37:00,572][00196] Avg episode reward: [(0, '15.782')] [2025-02-13 16:37:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2187264. Throughput: 0: 189.8. Samples: 549166. 
Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:37:05,582][00196] Avg episode reward: [(0, '16.038')] [2025-02-13 16:37:06,177][10122] Saving new best policy, reward=15.782! [2025-02-13 16:37:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2191360. Throughput: 0: 183.5. Samples: 550170. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:37:10,573][00196] Avg episode reward: [(0, '16.216')] [2025-02-13 16:37:11,122][10122] Saving new best policy, reward=16.038! [2025-02-13 16:37:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2195456. Throughput: 0: 188.7. Samples: 550662. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:37:15,573][00196] Avg episode reward: [(0, '16.447')] [2025-02-13 16:37:17,932][10122] Saving new best policy, reward=16.216! [2025-02-13 16:37:18,062][10122] Saving new best policy, reward=16.447! [2025-02-13 16:37:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2199552. Throughput: 0: 186.5. Samples: 551856. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:37:20,572][00196] Avg episode reward: [(0, '16.691')] [2025-02-13 16:37:22,353][10122] Saving new best policy, reward=16.691! [2025-02-13 16:37:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2203648. Throughput: 0: 186.1. Samples: 553334. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:37:25,572][00196] Avg episode reward: [(0, '16.951')] [2025-02-13 16:37:30,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2207744. Throughput: 0: 192.7. Samples: 553788. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0) [2025-02-13 16:37:30,573][00196] Avg episode reward: [(0, '16.918')] [2025-02-13 16:37:33,243][10122] Saving new best policy, reward=16.951! 
[2025-02-13 16:37:33,250][10139] Updated weights for policy 0, policy_version 540 (0.0659)
[2025-02-13 16:37:35,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2211840. Throughput: 0: 192.0. Samples: 554748. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:37:35,572][00196] Avg episode reward: [(0, '16.610')]
[2025-02-13 16:37:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2215936. Throughput: 0: 197.1. Samples: 556308. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:37:40,575][00196] Avg episode reward: [(0, '16.690')]
[2025-02-13 16:37:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2220032. Throughput: 0: 190.2. Samples: 556660. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:37:45,574][00196] Avg episode reward: [(0, '16.870')]
[2025-02-13 16:37:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2224128. Throughput: 0: 185.8. Samples: 557526. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:37:50,573][00196] Avg episode reward: [(0, '16.948')]
[2025-02-13 16:37:53,892][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000544_2228224.pth...
[2025-02-13 16:37:54,039][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000499_2043904.pth
[2025-02-13 16:37:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2228224. Throughput: 0: 196.0. Samples: 558990. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:37:55,579][00196] Avg episode reward: [(0, '17.308')]
[2025-02-13 16:37:58,586][10122] Saving new best policy, reward=17.308!
[2025-02-13 16:38:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2232320. Throughput: 0: 195.6. Samples: 559464. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:38:00,578][00196] Avg episode reward: [(0, '17.374')]
[2025-02-13 16:38:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2232320. Throughput: 0: 191.4. Samples: 560470. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:38:05,579][00196] Avg episode reward: [(0, '16.632')]
[2025-02-13 16:38:05,918][10122] Saving new best policy, reward=17.374!
[2025-02-13 16:38:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2236416. Throughput: 0: 180.8. Samples: 561468. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:38:10,573][00196] Avg episode reward: [(0, '16.470')]
[2025-02-13 16:38:15,572][00196] Fps is (10 sec: 1228.5, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2244608. Throughput: 0: 187.5. Samples: 562228. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:38:15,576][00196] Avg episode reward: [(0, '16.326')]
[2025-02-13 16:38:20,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2244608. Throughput: 0: 190.6. Samples: 563326. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:38:20,575][00196] Avg episode reward: [(0, '16.183')]
[2025-02-13 16:38:25,569][00196] Fps is (10 sec: 409.7, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2248704. Throughput: 0: 183.9. Samples: 564582. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:38:25,572][00196] Avg episode reward: [(0, '16.622')]
[2025-02-13 16:38:25,799][10139] Updated weights for policy 0, policy_version 550 (0.1066)
[2025-02-13 16:38:28,442][10122] Signal inference workers to stop experience collection... (550 times)
[2025-02-13 16:38:28,492][10139] InferenceWorker_p0-w0: stopping experience collection (550 times)
[2025-02-13 16:38:30,246][10122] Signal inference workers to resume experience collection... (550 times)
[2025-02-13 16:38:30,249][10139] InferenceWorker_p0-w0: resuming experience collection (550 times)
[2025-02-13 16:38:30,569][00196] Fps is (10 sec: 1229.0, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2256896. Throughput: 0: 197.5. Samples: 565548. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:38:30,575][00196] Avg episode reward: [(0, '16.794')]
[2025-02-13 16:38:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2256896. Throughput: 0: 196.1. Samples: 566350. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:38:35,572][00196] Avg episode reward: [(0, '17.453')]
[2025-02-13 16:38:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 2260992. Throughput: 0: 190.6. Samples: 567568. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:38:40,573][00196] Avg episode reward: [(0, '16.802')]
[2025-02-13 16:38:40,975][10122] Saving new best policy, reward=17.453!
[2025-02-13 16:38:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2265088. Throughput: 0: 199.6. Samples: 568446. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:38:45,583][00196] Avg episode reward: [(0, '15.734')]
[2025-02-13 16:38:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2269184. Throughput: 0: 202.7. Samples: 569590. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:38:50,574][00196] Avg episode reward: [(0, '15.909')]
[2025-02-13 16:38:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2273280. Throughput: 0: 203.7. Samples: 570636. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:38:55,579][00196] Avg episode reward: [(0, '15.373')]
[2025-02-13 16:39:00,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2281472. Throughput: 0: 207.0. Samples: 571542. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:39:00,579][00196] Avg episode reward: [(0, '15.267')]
[2025-02-13 16:39:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2281472. Throughput: 0: 207.2. Samples: 572650. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:39:05,572][00196] Avg episode reward: [(0, '15.550')]
[2025-02-13 16:39:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2285568. Throughput: 0: 203.6. Samples: 573744. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:39:10,574][00196] Avg episode reward: [(0, '15.609')]
[2025-02-13 16:39:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 2289664. Throughput: 0: 198.0. Samples: 574456. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:39:15,575][00196] Avg episode reward: [(0, '15.990')]
[2025-02-13 16:39:16,360][10139] Updated weights for policy 0, policy_version 560 (0.0564)
[2025-02-13 16:39:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2293760. Throughput: 0: 204.2. Samples: 575540. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:39:20,578][00196] Avg episode reward: [(0, '16.233')]
[2025-02-13 16:39:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2297856. Throughput: 0: 203.9. Samples: 576744. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:39:25,572][00196] Avg episode reward: [(0, '15.552')]
[2025-02-13 16:39:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2301952. Throughput: 0: 189.4. Samples: 576968. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:39:30,579][00196] Avg episode reward: [(0, '15.366')]
[2025-02-13 16:39:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2306048. Throughput: 0: 196.2. Samples: 578420. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:39:35,579][00196] Avg episode reward: [(0, '15.035')]
[2025-02-13 16:39:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2310144. Throughput: 0: 193.0. Samples: 579322. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:39:40,579][00196] Avg episode reward: [(0, '14.961')]
[2025-02-13 16:39:45,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2314240. Throughput: 0: 187.8. Samples: 579992. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:39:45,579][00196] Avg episode reward: [(0, '15.285')]
[2025-02-13 16:39:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2318336. Throughput: 0: 193.3. Samples: 581348. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:39:50,576][00196] Avg episode reward: [(0, '15.376')]
[2025-02-13 16:39:54,417][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000567_2322432.pth...
[2025-02-13 16:39:54,536][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000522_2138112.pth
[2025-02-13 16:39:55,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2322432. Throughput: 0: 191.1. Samples: 582344. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:39:55,572][00196] Avg episode reward: [(0, '15.382')]
[2025-02-13 16:40:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2326528. Throughput: 0: 188.9. Samples: 582956. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:40:00,573][00196] Avg episode reward: [(0, '15.429')]
[2025-02-13 16:40:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2330624. Throughput: 0: 192.5. Samples: 584202. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:40:05,577][00196] Avg episode reward: [(0, '15.032')]
[2025-02-13 16:40:10,193][10139] Updated weights for policy 0, policy_version 570 (0.1129)
[2025-02-13 16:40:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2334720. Throughput: 0: 185.2. Samples: 585078. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:40:10,574][00196] Avg episode reward: [(0, '15.063')]
[2025-02-13 16:40:15,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2338816. Throughput: 0: 201.5. Samples: 586034. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:40:15,576][00196] Avg episode reward: [(0, '15.147')]
[2025-02-13 16:40:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2342912. Throughput: 0: 192.8. Samples: 587098. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:40:20,576][00196] Avg episode reward: [(0, '15.115')]
[2025-02-13 16:40:25,569][00196] Fps is (10 sec: 409.7, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2342912. Throughput: 0: 194.7. Samples: 588082. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:40:25,577][00196] Avg episode reward: [(0, '15.093')]
[2025-02-13 16:40:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2347008. Throughput: 0: 199.5. Samples: 588968. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:40:30,573][00196] Avg episode reward: [(0, '15.953')]
[2025-02-13 16:40:35,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2355200. Throughput: 0: 195.2. Samples: 590132. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:40:35,572][00196] Avg episode reward: [(0, '16.040')]
[2025-02-13 16:40:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2355200. Throughput: 0: 196.0. Samples: 591164. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:40:40,573][00196] Avg episode reward: [(0, '16.129')]
[2025-02-13 16:40:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2359296. Throughput: 0: 189.0. Samples: 591460. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:40:45,581][00196] Avg episode reward: [(0, '16.482')]
[2025-02-13 16:40:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2363392. Throughput: 0: 200.0. Samples: 593202. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:40:50,581][00196] Avg episode reward: [(0, '16.680')]
[2025-02-13 16:40:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2367488. Throughput: 0: 198.7. Samples: 594018. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:40:55,573][00196] Avg episode reward: [(0, '16.092')]
[2025-02-13 16:41:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2371584. Throughput: 0: 189.5. Samples: 594560. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:41:00,580][00196] Avg episode reward: [(0, '15.630')]
[2025-02-13 16:41:01,995][10139] Updated weights for policy 0, policy_version 580 (0.2206)
[2025-02-13 16:41:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2375680. Throughput: 0: 201.0. Samples: 596144. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:41:05,578][00196] Avg episode reward: [(0, '15.980')]
[2025-02-13 16:41:10,573][00196] Fps is (10 sec: 818.9, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2379776. Throughput: 0: 197.4. Samples: 596966. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:41:10,580][00196] Avg episode reward: [(0, '16.121')]
[2025-02-13 16:41:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.5). Total num frames: 2383872. Throughput: 0: 185.7. Samples: 597324. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:41:15,573][00196] Avg episode reward: [(0, '16.882')]
[2025-02-13 16:41:20,569][00196] Fps is (10 sec: 819.5, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2387968. Throughput: 0: 189.3. Samples: 598650. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:41:20,573][00196] Avg episode reward: [(0, '17.279')]
[2025-02-13 16:41:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2392064. Throughput: 0: 198.1. Samples: 600080. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:41:25,577][00196] Avg episode reward: [(0, '17.219')]
[2025-02-13 16:41:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2396160. Throughput: 0: 199.2. Samples: 600422. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:41:30,572][00196] Avg episode reward: [(0, '17.526')]
[2025-02-13 16:41:33,747][10122] Saving new best policy, reward=17.526!
[2025-02-13 16:41:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2400256. Throughput: 0: 188.2. Samples: 601670. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:41:35,572][00196] Avg episode reward: [(0, '17.779')]
[2025-02-13 16:41:38,495][10122] Saving new best policy, reward=17.779!
[2025-02-13 16:41:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2404352. Throughput: 0: 196.1. Samples: 602842. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:41:40,574][00196] Avg episode reward: [(0, '17.568')]
[2025-02-13 16:41:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2408448. Throughput: 0: 197.4. Samples: 603444. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:41:45,573][00196] Avg episode reward: [(0, '17.713')]
[2025-02-13 16:41:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2412544. Throughput: 0: 185.3. Samples: 604484. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:41:50,573][00196] Avg episode reward: [(0, '18.357')]
[2025-02-13 16:41:54,853][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000590_2416640.pth...
[2025-02-13 16:41:54,853][10139] Updated weights for policy 0, policy_version 590 (0.1107)
[2025-02-13 16:41:55,092][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000544_2228224.pth
[2025-02-13 16:41:55,121][10122] Saving new best policy, reward=18.357!
[2025-02-13 16:41:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2416640. Throughput: 0: 193.3. Samples: 605664. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:41:55,573][00196] Avg episode reward: [(0, '18.937')]
[2025-02-13 16:42:00,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2416640. Throughput: 0: 198.7. Samples: 606264. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:42:00,579][00196] Avg episode reward: [(0, '19.034')]
[2025-02-13 16:42:01,480][10122] Saving new best policy, reward=18.937!
[2025-02-13 16:42:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2420736. Throughput: 0: 198.6. Samples: 607588. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:42:05,579][00196] Avg episode reward: [(0, '19.395')]
[2025-02-13 16:42:05,984][10122] Saving new best policy, reward=19.034!
[2025-02-13 16:42:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.5). Total num frames: 2424832. Throughput: 0: 190.0. Samples: 608632. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:42:10,572][00196] Avg episode reward: [(0, '18.793')]
[2025-02-13 16:42:10,791][10122] Saving new best policy, reward=19.395!
[2025-02-13 16:42:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2428928. Throughput: 0: 193.9. Samples: 609146. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:42:15,575][00196] Avg episode reward: [(0, '18.697')]
[2025-02-13 16:42:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2433024. Throughput: 0: 192.8. Samples: 610344. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:42:20,574][00196] Avg episode reward: [(0, '18.594')]
[2025-02-13 16:42:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2437120. Throughput: 0: 195.6. Samples: 611646. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:42:25,572][00196] Avg episode reward: [(0, '19.050')]
[2025-02-13 16:42:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2441216. Throughput: 0: 194.4. Samples: 612194. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:42:30,573][00196] Avg episode reward: [(0, '19.121')]
[2025-02-13 16:42:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2445312. Throughput: 0: 193.3. Samples: 613184. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:42:35,572][00196] Avg episode reward: [(0, '19.183')]
[2025-02-13 16:42:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2449408. Throughput: 0: 201.3. Samples: 614722. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:42:40,582][00196] Avg episode reward: [(0, '19.455')]
[2025-02-13 16:42:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2453504. Throughput: 0: 199.6. Samples: 615248. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:42:45,573][00196] Avg episode reward: [(0, '19.329')]
[2025-02-13 16:42:48,120][10122] Saving new best policy, reward=19.455!
[2025-02-13 16:42:48,128][10139] Updated weights for policy 0, policy_version 600 (0.1116)
[2025-02-13 16:42:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2457600. Throughput: 0: 191.3. Samples: 616198. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:42:50,573][00196] Avg episode reward: [(0, '19.338')]
[2025-02-13 16:42:50,894][10122] Signal inference workers to stop experience collection... (600 times)
[2025-02-13 16:42:50,952][10139] InferenceWorker_p0-w0: stopping experience collection (600 times)
[2025-02-13 16:42:52,654][10122] Signal inference workers to resume experience collection... (600 times)
[2025-02-13 16:42:52,657][10139] InferenceWorker_p0-w0: resuming experience collection (600 times)
[2025-02-13 16:42:55,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2461696. Throughput: 0: 201.6. Samples: 617704. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:42:55,572][00196] Avg episode reward: [(0, '19.330')]
[2025-02-13 16:43:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2465792. Throughput: 0: 203.5. Samples: 618304. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:00,572][00196] Avg episode reward: [(0, '19.490')]
[2025-02-13 16:43:03,065][10122] Saving new best policy, reward=19.490!
[2025-02-13 16:43:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2469888. Throughput: 0: 199.4. Samples: 619318. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:05,573][00196] Avg episode reward: [(0, '19.466')]
[2025-02-13 16:43:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.6). Total num frames: 2473984. Throughput: 0: 204.2. Samples: 620834. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:10,578][00196] Avg episode reward: [(0, '19.423')]
[2025-02-13 16:43:15,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2478080. Throughput: 0: 202.3. Samples: 621300. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:15,575][00196] Avg episode reward: [(0, '19.272')]
[2025-02-13 16:43:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2482176. Throughput: 0: 204.2. Samples: 622374. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:20,572][00196] Avg episode reward: [(0, '19.053')]
[2025-02-13 16:43:25,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2486272. Throughput: 0: 204.7. Samples: 623934. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:25,581][00196] Avg episode reward: [(0, '19.148')]
[2025-02-13 16:43:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2490368. Throughput: 0: 208.3. Samples: 624620. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:30,572][00196] Avg episode reward: [(0, '19.394')]
[2025-02-13 16:43:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2494464. Throughput: 0: 205.6. Samples: 625448. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:35,579][00196] Avg episode reward: [(0, '19.209')]
[2025-02-13 16:43:37,950][10139] Updated weights for policy 0, policy_version 610 (0.2633)
[2025-02-13 16:43:40,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2498560. Throughput: 0: 205.1. Samples: 626934. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:40,573][00196] Avg episode reward: [(0, '18.996')]
[2025-02-13 16:43:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2502656. Throughput: 0: 206.0. Samples: 627576. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:45,578][00196] Avg episode reward: [(0, '18.359')]
[2025-02-13 16:43:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2506752. Throughput: 0: 202.9. Samples: 628448. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:50,574][00196] Avg episode reward: [(0, '18.286')]
[2025-02-13 16:43:52,755][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000613_2510848.pth...
[2025-02-13 16:43:52,873][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000567_2322432.pth
[2025-02-13 16:43:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2510848. Throughput: 0: 204.4. Samples: 630030. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:43:55,583][00196] Avg episode reward: [(0, '17.917')]
[2025-02-13 16:44:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2514944. Throughput: 0: 200.3. Samples: 630314. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:44:00,579][00196] Avg episode reward: [(0, '18.041')]
[2025-02-13 16:44:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2519040. Throughput: 0: 199.6. Samples: 631358. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:44:05,573][00196] Avg episode reward: [(0, '18.303')]
[2025-02-13 16:44:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2523136. Throughput: 0: 204.3. Samples: 633128. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:44:10,573][00196] Avg episode reward: [(0, '18.860')]
[2025-02-13 16:44:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2527232. Throughput: 0: 202.7. Samples: 633742. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:44:15,575][00196] Avg episode reward: [(0, '18.986')]
[2025-02-13 16:44:20,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2531328. Throughput: 0: 208.4. Samples: 634824. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:44:20,579][00196] Avg episode reward: [(0, '19.054')]
[2025-02-13 16:44:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2535424. Throughput: 0: 205.9. Samples: 636198. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:44:25,576][00196] Avg episode reward: [(0, '18.973')]
[2025-02-13 16:44:26,948][10139] Updated weights for policy 0, policy_version 620 (0.1204)
[2025-02-13 16:44:30,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2539520. Throughput: 0: 203.9. Samples: 636752. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:44:30,581][00196] Avg episode reward: [(0, '18.733')]
[2025-02-13 16:44:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2543616. Throughput: 0: 208.3. Samples: 637820. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:44:35,572][00196] Avg episode reward: [(0, '18.497')]
[2025-02-13 16:44:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2547712. Throughput: 0: 206.0. Samples: 639302. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:44:40,574][00196] Avg episode reward: [(0, '18.385')]
[2025-02-13 16:44:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2551808. Throughput: 0: 207.0. Samples: 639630. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:44:45,575][00196] Avg episode reward: [(0, '18.655')]
[2025-02-13 16:44:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2555904. Throughput: 0: 216.2. Samples: 641086. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 16:44:50,573][00196] Avg episode reward: [(0, '19.034')]
[2025-02-13 16:44:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2560000. Throughput: 0: 205.7. Samples: 642386. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:44:55,580][00196] Avg episode reward: [(0, '18.924')]
[2025-02-13 16:45:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2564096. Throughput: 0: 203.4. Samples: 642894. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:45:00,574][00196] Avg episode reward: [(0, '19.051')]
[2025-02-13 16:45:05,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2568192. Throughput: 0: 205.1. Samples: 644052. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:45:05,576][00196] Avg episode reward: [(0, '18.516')]
[2025-02-13 16:45:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2572288. Throughput: 0: 199.8. Samples: 645188. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:45:10,575][00196] Avg episode reward: [(0, '19.254')]
[2025-02-13 16:45:15,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2576384. Throughput: 0: 200.3. Samples: 645764. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:45:15,578][00196] Avg episode reward: [(0, '18.991')]
[2025-02-13 16:45:16,967][10139] Updated weights for policy 0, policy_version 630 (0.1128)
[2025-02-13 16:45:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2580480. Throughput: 0: 211.2. Samples: 647322. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:45:20,577][00196] Avg episode reward: [(0, '18.808')]
[2025-02-13 16:45:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2584576. Throughput: 0: 200.6. Samples: 648328. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:45:25,572][00196] Avg episode reward: [(0, '19.374')]
[2025-02-13 16:45:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2588672. Throughput: 0: 200.8. Samples: 648666. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:45:30,573][00196] Avg episode reward: [(0, '19.192')]
[2025-02-13 16:45:35,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2592768. Throughput: 0: 203.7. Samples: 650252. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:45:35,576][00196] Avg episode reward: [(0, '18.981')]
[2025-02-13 16:45:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2596864. Throughput: 0: 200.8. Samples: 651420. Policy #0 lag: (min: 1.0, avg: 1.3, max: 3.0)
[2025-02-13 16:45:40,572][00196] Avg episode reward: [(0, '19.421')]
[2025-02-13 16:45:45,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2600960. Throughput: 0: 200.8. Samples: 651928. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:45:45,572][00196] Avg episode reward: [(0, '19.923')]
[2025-02-13 16:45:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2605056. Throughput: 0: 208.4. Samples: 653428. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:45:50,577][00196] Avg episode reward: [(0, '19.631')]
[2025-02-13 16:45:53,382][10122] Saving new best policy, reward=19.923!
[2025-02-13 16:45:53,504][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000637_2609152.pth...
[2025-02-13 16:45:53,675][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000590_2416640.pth
[2025-02-13 16:45:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2609152. Throughput: 0: 201.3. Samples: 654246. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:45:55,572][00196] Avg episode reward: [(0, '19.406')]
[2025-02-13 16:46:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2613248. Throughput: 0: 200.1. Samples: 654768. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:00,572][00196] Avg episode reward: [(0, '18.456')]
[2025-02-13 16:46:05,574][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2617344. Throughput: 0: 198.2. Samples: 656244. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:05,585][00196] Avg episode reward: [(0, '18.449')]
[2025-02-13 16:46:08,641][10139] Updated weights for policy 0, policy_version 640 (0.1552)
[2025-02-13 16:46:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2621440. Throughput: 0: 199.7. Samples: 657314. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:10,575][00196] Avg episode reward: [(0, '18.723')]
[2025-02-13 16:46:15,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2625536. Throughput: 0: 204.2. Samples: 657856. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:15,580][00196] Avg episode reward: [(0, '18.538')]
[2025-02-13 16:46:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2629632. Throughput: 0: 199.5. Samples: 659230. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:20,587][00196] Avg episode reward: [(0, '18.449')]
[2025-02-13 16:46:25,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2633728. Throughput: 0: 195.8. Samples: 660230. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:25,580][00196] Avg episode reward: [(0, '18.919')]
[2025-02-13 16:46:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2637824. Throughput: 0: 200.3. Samples: 660942. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:30,579][00196] Avg episode reward: [(0, '19.699')]
[2025-02-13 16:46:35,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2641920. Throughput: 0: 198.3. Samples: 662352. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:35,573][00196] Avg episode reward: [(0, '19.586')]
[2025-02-13 16:46:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 2646016. Throughput: 0: 200.3. Samples: 663260. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:40,577][00196] Avg episode reward: [(0, '19.586')]
[2025-02-13 16:46:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 2646016. Throughput: 0: 204.3. Samples: 663960. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:46:45,574][00196] Avg episode reward: [(0, '19.519')]
[2025-02-13 16:46:50,582][00196] Fps is (10 sec: 409.1, 60 sec: 750.8, 300 sec: 791.4). Total num frames: 2650112. Throughput: 0: 179.9. Samples: 664340. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:46:50,585][00196] Avg episode reward: [(0, '19.352')]
[2025-02-13 16:46:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2650112. Throughput: 0: 170.1. Samples: 664968. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:46:55,573][00196] Avg episode reward: [(0, '19.908')]
[2025-02-13 16:47:00,569][00196] Fps is (10 sec: 410.1, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2654208. Throughput: 0: 176.8. Samples: 665810. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:47:00,579][00196] Avg episode reward: [(0, '19.892')]
[2025-02-13 16:47:05,230][10139] Updated weights for policy 0, policy_version 650 (0.0621)
[2025-02-13 16:47:05,569][00196] Fps is (10 sec: 1228.8, 60 sec: 751.0, 300 sec: 805.3). Total num frames: 2662400. Throughput: 0: 173.7. Samples: 667048. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:47:05,576][00196] Avg episode reward: [(0, '19.934')]
[2025-02-13 16:47:08,982][10122] Signal inference workers to stop experience collection... (650 times)
[2025-02-13 16:47:09,077][10139] InferenceWorker_p0-w0: stopping experience collection (650 times)
[2025-02-13 16:47:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2662400. Throughput: 0: 174.8. Samples: 668096. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:47:10,577][00196] Avg episode reward: [(0, '20.180')]
[2025-02-13 16:47:11,401][10122] Signal inference workers to resume experience collection... (650 times)
[2025-02-13 16:47:11,403][10139] InferenceWorker_p0-w0: resuming experience collection (650 times)
[2025-02-13 16:47:11,410][10122] Saving new best policy, reward=19.934!
[2025-02-13 16:47:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2666496. Throughput: 0: 174.3. Samples: 668786. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:47:15,577][00196] Avg episode reward: [(0, '20.080')]
[2025-02-13 16:47:16,085][10122] Saving new best policy, reward=20.180!
[2025-02-13 16:47:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2670592. Throughput: 0: 174.0. Samples: 670184. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:47:20,584][00196] Avg episode reward: [(0, '19.537')]
[2025-02-13 16:47:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2674688. Throughput: 0: 176.0. Samples: 671180. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:47:25,574][00196] Avg episode reward: [(0, '20.179')]
[2025-02-13 16:47:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2678784. Throughput: 0: 172.3. Samples: 671712. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:47:30,583][00196] Avg episode reward: [(0, '20.527')]
[2025-02-13 16:47:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2682880. Throughput: 0: 194.9. Samples: 673106. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:47:35,573][00196] Avg episode reward: [(0, '20.801')]
[2025-02-13 16:47:36,066][10122] Saving new best policy, reward=20.527!
[2025-02-13 16:47:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 791.4). Total num frames: 2686976. Throughput: 0: 207.7. Samples: 674314. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:47:40,572][00196] Avg episode reward: [(0, '20.648')]
[2025-02-13 16:47:42,387][10122] Saving new best policy, reward=20.801!
[2025-02-13 16:47:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 2691072. Throughput: 0: 197.0. Samples: 674676. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:47:45,578][00196] Avg episode reward: [(0, '20.739')]
[2025-02-13 16:47:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.1, 300 sec: 791.4). Total num frames: 2695168. Throughput: 0: 204.7. Samples: 676258. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:47:50,577][00196] Avg episode reward: [(0, '20.451')]
[2025-02-13 16:47:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2699264. Throughput: 0: 200.9. Samples: 677136. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:47:55,572][00196] Avg episode reward: [(0, '20.529')]
[2025-02-13 16:47:57,841][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000660_2703360.pth...
[2025-02-13 16:47:57,852][10139] Updated weights for policy 0, policy_version 660 (0.1132) [2025-02-13 16:47:57,988][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000613_2510848.pth [2025-02-13 16:48:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2703360. Throughput: 0: 194.3. Samples: 677530. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:48:00,572][00196] Avg episode reward: [(0, '20.405')] [2025-02-13 16:48:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 2707456. Throughput: 0: 201.0. Samples: 679230. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:48:05,573][00196] Avg episode reward: [(0, '20.439')] [2025-02-13 16:48:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2711552. Throughput: 0: 205.3. Samples: 680418. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:48:10,574][00196] Avg episode reward: [(0, '20.072')] [2025-02-13 16:48:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2715648. Throughput: 0: 198.1. Samples: 680628. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:48:15,577][00196] Avg episode reward: [(0, '19.612')] [2025-02-13 16:48:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2719744. Throughput: 0: 201.9. Samples: 682190. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:48:20,575][00196] Avg episode reward: [(0, '19.171')] [2025-02-13 16:48:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2723840. Throughput: 0: 204.1. Samples: 683500. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:48:25,578][00196] Avg episode reward: [(0, '18.691')] [2025-02-13 16:48:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2727936. Throughput: 0: 201.3. Samples: 683736. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:48:30,576][00196] Avg episode reward: [(0, '18.374')] [2025-02-13 16:48:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2732032. Throughput: 0: 199.2. Samples: 685224. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:48:35,575][00196] Avg episode reward: [(0, '18.176')] [2025-02-13 16:48:40,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2736128. Throughput: 0: 209.7. Samples: 686572. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:48:40,575][00196] Avg episode reward: [(0, '18.176')] [2025-02-13 16:48:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2740224. Throughput: 0: 203.3. Samples: 686680. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:48:45,579][00196] Avg episode reward: [(0, '18.341')] [2025-02-13 16:48:47,731][10139] Updated weights for policy 0, policy_version 670 (0.0556) [2025-02-13 16:48:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2744320. Throughput: 0: 200.4. Samples: 688248. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:48:50,578][00196] Avg episode reward: [(0, '18.988')] [2025-02-13 16:48:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2748416. Throughput: 0: 200.2. Samples: 689426. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:48:55,577][00196] Avg episode reward: [(0, '18.542')] [2025-02-13 16:49:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2752512. Throughput: 0: 201.6. Samples: 689698. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:49:00,582][00196] Avg episode reward: [(0, '18.162')] [2025-02-13 16:49:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2756608. Throughput: 0: 199.8. Samples: 691182. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:49:05,572][00196] Avg episode reward: [(0, '18.524')] [2025-02-13 16:49:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2760704. Throughput: 0: 203.2. Samples: 692646. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:49:10,572][00196] Avg episode reward: [(0, '18.768')] [2025-02-13 16:49:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2764800. Throughput: 0: 198.9. Samples: 692686. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:49:15,581][00196] Avg episode reward: [(0, '18.472')] [2025-02-13 16:49:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2768896. Throughput: 0: 191.5. Samples: 693840. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:49:20,577][00196] Avg episode reward: [(0, '18.346')] [2025-02-13 16:49:25,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 791.4). Total num frames: 2772992. Throughput: 0: 193.3. Samples: 695270. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:49:25,577][00196] Avg episode reward: [(0, '17.731')] [2025-02-13 16:49:30,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2777088. Throughput: 0: 201.6. Samples: 695754. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:49:30,573][00196] Avg episode reward: [(0, '17.747')] [2025-02-13 16:49:35,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2781184. Throughput: 0: 189.8. Samples: 696788. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:49:35,573][00196] Avg episode reward: [(0, '17.692')] [2025-02-13 16:49:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2781184. Throughput: 0: 187.5. Samples: 697862. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:49:40,572][00196] Avg episode reward: [(0, '17.594')] [2025-02-13 16:49:40,596][10139] Updated weights for policy 0, policy_version 680 (0.3077) [2025-02-13 16:49:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2785280. Throughput: 0: 193.7. Samples: 698416. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:49:45,578][00196] Avg episode reward: [(0, '17.188')] [2025-02-13 16:49:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2789376. Throughput: 0: 185.8. Samples: 699542. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:49:50,580][00196] Avg episode reward: [(0, '17.681')] [2025-02-13 16:49:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2793472. Throughput: 0: 183.5. Samples: 700902. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:49:55,576][00196] Avg episode reward: [(0, '17.478')] [2025-02-13 16:49:57,555][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000683_2797568.pth... [2025-02-13 16:49:57,731][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000637_2609152.pth [2025-02-13 16:50:00,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2797568. Throughput: 0: 186.6. Samples: 701082. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:50:00,579][00196] Avg episode reward: [(0, '17.394')] [2025-02-13 16:50:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2801664. Throughput: 0: 179.9. Samples: 701934. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:50:05,575][00196] Avg episode reward: [(0, '17.574')] [2025-02-13 16:50:10,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2805760. Throughput: 0: 179.3. Samples: 703338. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:50:10,572][00196] Avg episode reward: [(0, '17.839')] [2025-02-13 16:50:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2809856. Throughput: 0: 181.8. Samples: 703936. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:50:15,576][00196] Avg episode reward: [(0, '17.709')] [2025-02-13 16:50:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 763.7). Total num frames: 2809856. Throughput: 0: 181.5. Samples: 704954. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:50:20,580][00196] Avg episode reward: [(0, '18.488')] [2025-02-13 16:50:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.5). Total num frames: 2818048. Throughput: 0: 170.7. Samples: 705544. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:50:25,572][00196] Avg episode reward: [(0, '18.538')] [2025-02-13 16:50:30,570][00196] Fps is (10 sec: 1228.8, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2822144. Throughput: 0: 190.7. Samples: 706996. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:50:30,578][00196] Avg episode reward: [(0, '18.733')] [2025-02-13 16:50:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 763.7). Total num frames: 2822144. Throughput: 0: 185.2. Samples: 707876. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 16:50:35,579][00196] Avg episode reward: [(0, '18.488')] [2025-02-13 16:50:36,249][10139] Updated weights for policy 0, policy_version 690 (0.1128) [2025-02-13 16:50:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2830336. Throughput: 0: 182.1. Samples: 709096. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:50:40,577][00196] Avg episode reward: [(0, '18.151')] [2025-02-13 16:50:45,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2834432. Throughput: 0: 200.8. Samples: 710116. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:50:45,574][00196] Avg episode reward: [(0, '18.225')] [2025-02-13 16:50:50,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2834432. Throughput: 0: 201.8. Samples: 711014. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:50:50,572][00196] Avg episode reward: [(0, '17.779')] [2025-02-13 16:50:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2838528. Throughput: 0: 195.7. Samples: 712144. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:50:55,580][00196] Avg episode reward: [(0, '17.321')] [2025-02-13 16:51:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 2842624. Throughput: 0: 200.6. Samples: 712962. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:51:00,572][00196] Avg episode reward: [(0, '17.275')] [2025-02-13 16:51:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2846720. Throughput: 0: 199.8. Samples: 713944. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:51:05,577][00196] Avg episode reward: [(0, '16.697')] [2025-02-13 16:51:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2850816. Throughput: 0: 214.9. Samples: 715214. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:51:10,572][00196] Avg episode reward: [(0, '16.652')] [2025-02-13 16:51:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2854912. Throughput: 0: 192.6. Samples: 715662. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:51:15,575][00196] Avg episode reward: [(0, '16.735')] [2025-02-13 16:51:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2859008. Throughput: 0: 203.6. Samples: 717040. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:51:20,572][00196] Avg episode reward: [(0, '17.012')] [2025-02-13 16:51:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2863104. Throughput: 0: 202.8. Samples: 718220. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:51:25,578][00196] Avg episode reward: [(0, '17.300')] [2025-02-13 16:51:27,452][10139] Updated weights for policy 0, policy_version 700 (0.3130) [2025-02-13 16:51:30,167][10122] Signal inference workers to stop experience collection... (700 times) [2025-02-13 16:51:30,238][10139] InferenceWorker_p0-w0: stopping experience collection (700 times) [2025-02-13 16:51:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 2867200. Throughput: 0: 190.3. Samples: 718680. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:51:30,573][00196] Avg episode reward: [(0, '17.529')] [2025-02-13 16:51:31,597][10122] Signal inference workers to resume experience collection... (700 times) [2025-02-13 16:51:31,600][10139] InferenceWorker_p0-w0: resuming experience collection (700 times) [2025-02-13 16:51:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 2871296. Throughput: 0: 195.9. Samples: 719830. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:51:35,573][00196] Avg episode reward: [(0, '17.816')] [2025-02-13 16:51:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2875392. Throughput: 0: 201.5. Samples: 721212. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:51:40,575][00196] Avg episode reward: [(0, '17.744')] [2025-02-13 16:51:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.6). Total num frames: 2879488. Throughput: 0: 193.4. Samples: 721666. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:51:45,578][00196] Avg episode reward: [(0, '18.398')] [2025-02-13 16:51:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2883584. Throughput: 0: 203.6. Samples: 723108. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:51:50,579][00196] Avg episode reward: [(0, '18.212')] [2025-02-13 16:51:52,907][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000705_2887680.pth... [2025-02-13 16:51:53,112][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000660_2703360.pth [2025-02-13 16:51:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2887680. Throughput: 0: 201.1. Samples: 724264. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:51:55,572][00196] Avg episode reward: [(0, '18.183')] [2025-02-13 16:52:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 2891776. Throughput: 0: 201.0. Samples: 724706. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:00,579][00196] Avg episode reward: [(0, '18.254')] [2025-02-13 16:52:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2895872. Throughput: 0: 203.2. Samples: 726182. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:52:05,574][00196] Avg episode reward: [(0, '17.380')] [2025-02-13 16:52:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2899968. Throughput: 0: 201.2. Samples: 727272. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:52:10,572][00196] Avg episode reward: [(0, '17.356')] [2025-02-13 16:52:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2904064. Throughput: 0: 199.3. Samples: 727648. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:15,575][00196] Avg episode reward: [(0, '17.197')] [2025-02-13 16:52:17,226][10139] Updated weights for policy 0, policy_version 710 (0.1030) [2025-02-13 16:52:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2908160. Throughput: 0: 209.3. Samples: 729250. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:20,576][00196] Avg episode reward: [(0, '16.890')] [2025-02-13 16:52:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2912256. Throughput: 0: 198.7. Samples: 730154. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:25,572][00196] Avg episode reward: [(0, '16.863')] [2025-02-13 16:52:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2916352. Throughput: 0: 201.4. Samples: 730730. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:30,581][00196] Avg episode reward: [(0, '16.846')] [2025-02-13 16:52:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2920448. Throughput: 0: 203.7. Samples: 732274. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:35,576][00196] Avg episode reward: [(0, '17.173')] [2025-02-13 16:52:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2924544. Throughput: 0: 198.6. Samples: 733202. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:40,574][00196] Avg episode reward: [(0, '17.247')] [2025-02-13 16:52:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2928640. Throughput: 0: 199.7. Samples: 733692. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:45,572][00196] Avg episode reward: [(0, '17.694')] [2025-02-13 16:52:50,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2932736. Throughput: 0: 201.0. Samples: 735228. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:50,584][00196] Avg episode reward: [(0, '17.960')] [2025-02-13 16:52:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2936832. Throughput: 0: 194.4. Samples: 736022. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:52:55,581][00196] Avg episode reward: [(0, '18.035')] [2025-02-13 16:53:00,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2940928. Throughput: 0: 202.0. Samples: 736738. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:53:00,571][00196] Avg episode reward: [(0, '18.661')] [2025-02-13 16:53:05,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2945024. Throughput: 0: 198.9. Samples: 738200. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:53:05,580][00196] Avg episode reward: [(0, '18.646')] [2025-02-13 16:53:09,064][10139] Updated weights for policy 0, policy_version 720 (0.1100) [2025-02-13 16:53:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2949120. Throughput: 0: 199.2. Samples: 739120. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:53:10,576][00196] Avg episode reward: [(0, '18.413')] [2025-02-13 16:53:15,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2953216. Throughput: 0: 202.1. Samples: 739826. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:53:15,572][00196] Avg episode reward: [(0, '18.354')] [2025-02-13 16:53:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2957312. Throughput: 0: 195.6. Samples: 741074. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:53:20,572][00196] Avg episode reward: [(0, '18.193')] [2025-02-13 16:53:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2961408. Throughput: 0: 198.9. Samples: 742152. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:53:25,575][00196] Avg episode reward: [(0, '18.273')] [2025-02-13 16:53:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2965504. Throughput: 0: 206.1. Samples: 742966. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:53:30,573][00196] Avg episode reward: [(0, '17.936')] [2025-02-13 16:53:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2969600. Throughput: 0: 195.8. Samples: 744038. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:53:35,573][00196] Avg episode reward: [(0, '17.493')] [2025-02-13 16:53:40,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2973696. Throughput: 0: 201.2. Samples: 745078. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:53:40,577][00196] Avg episode reward: [(0, '17.214')] [2025-02-13 16:53:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2977792. Throughput: 0: 205.0. Samples: 745964. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:53:45,572][00196] Avg episode reward: [(0, '17.040')] [2025-02-13 16:53:50,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2981888. Throughput: 0: 196.5. Samples: 747042. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 16:53:50,573][00196] Avg episode reward: [(0, '17.518')] [2025-02-13 16:53:54,960][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000729_2985984.pth... [2025-02-13 16:53:55,085][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000683_2797568.pth [2025-02-13 16:53:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2985984. Throughput: 0: 200.7. Samples: 748150. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:53:55,577][00196] Avg episode reward: [(0, '17.638')] [2025-02-13 16:54:00,575][00196] Fps is (10 sec: 409.4, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2985984. Throughput: 0: 200.9. Samples: 748866. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:54:00,581][00196] Avg episode reward: [(0, '18.484')] [2025-02-13 16:54:01,101][10139] Updated weights for policy 0, policy_version 730 (0.2811) [2025-02-13 16:54:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2994176. Throughput: 0: 199.9. Samples: 750070. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:54:05,571][00196] Avg episode reward: [(0, '18.421')] [2025-02-13 16:54:10,569][00196] Fps is (10 sec: 1229.5, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 2998272. Throughput: 0: 199.4. Samples: 751124. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:54:10,572][00196] Avg episode reward: [(0, '18.902')] [2025-02-13 16:54:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 2998272. Throughput: 0: 194.4. Samples: 751714. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 16:54:15,573][00196] Avg episode reward: [(0, '18.716')] [2025-02-13 16:54:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 3006464. Throughput: 0: 201.7. Samples: 753116. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:54:20,574][00196] Avg episode reward: [(0, '18.716')] [2025-02-13 16:54:25,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 3010560. Throughput: 0: 202.0. Samples: 754166. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:54:25,574][00196] Avg episode reward: [(0, '18.787')] [2025-02-13 16:54:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 3010560. Throughput: 0: 195.4. Samples: 754756. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:54:30,575][00196] Avg episode reward: [(0, '18.727')] [2025-02-13 16:54:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3014656. Throughput: 0: 202.6. Samples: 756158. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:54:35,578][00196] Avg episode reward: [(0, '18.315')] [2025-02-13 16:54:40,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3022848. Throughput: 0: 200.8. Samples: 757188. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:54:40,576][00196] Avg episode reward: [(0, '18.700')] [2025-02-13 16:54:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3022848. Throughput: 0: 199.8. Samples: 757858. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:54:45,580][00196] Avg episode reward: [(0, '18.102')] [2025-02-13 16:54:50,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3026944. Throughput: 0: 203.6. Samples: 759234. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:54:50,572][00196] Avg episode reward: [(0, '19.295')] [2025-02-13 16:54:51,383][10139] Updated weights for policy 0, policy_version 740 (0.1648) [2025-02-13 16:54:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3031040. Throughput: 0: 203.3. Samples: 760274. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 16:54:55,577][00196] Avg episode reward: [(0, '18.608')] [2025-02-13 16:55:00,574][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 3035136. Throughput: 0: 202.7. Samples: 760836. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 16:55:00,577][00196] Avg episode reward: [(0, '19.028')] [2025-02-13 16:55:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3039232. Throughput: 0: 198.6. Samples: 762054. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:55:05,582][00196] Avg episode reward: [(0, '19.365')]
[2025-02-13 16:55:10,569][00196] Fps is (10 sec: 819.5, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3043328. Throughput: 0: 205.5. Samples: 763414. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:55:10,573][00196] Avg episode reward: [(0, '19.435')]
[2025-02-13 16:55:15,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3047424. Throughput: 0: 199.9. Samples: 763750. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:55:15,578][00196] Avg episode reward: [(0, '19.254')]
[2025-02-13 16:55:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3051520. Throughput: 0: 200.4. Samples: 765178. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:55:20,572][00196] Avg episode reward: [(0, '19.163')]
[2025-02-13 16:55:25,569][00196] Fps is (10 sec: 819.5, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3055616. Throughput: 0: 205.5. Samples: 766434. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:55:25,575][00196] Avg episode reward: [(0, '19.193')]
[2025-02-13 16:55:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3059712. Throughput: 0: 205.8. Samples: 767118. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:55:30,571][00196] Avg episode reward: [(0, '19.189')]
[2025-02-13 16:55:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 3063808. Throughput: 0: 199.9. Samples: 768230. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:55:35,574][00196] Avg episode reward: [(0, '19.671')]
[2025-02-13 16:55:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3067904. Throughput: 0: 205.2. Samples: 769510. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:55:40,572][00196] Avg episode reward: [(0, '19.917')]
[2025-02-13 16:55:40,917][10139] Updated weights for policy 0, policy_version 750 (0.1672)
[2025-02-13 16:55:44,489][10122] Signal inference workers to stop experience collection... (750 times)
[2025-02-13 16:55:44,616][10139] InferenceWorker_p0-w0: stopping experience collection (750 times)
[2025-02-13 16:55:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3072000. Throughput: 0: 210.2. Samples: 770294. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:55:45,577][00196] Avg episode reward: [(0, '19.668')]
[2025-02-13 16:55:46,802][10122] Signal inference workers to resume experience collection... (750 times)
[2025-02-13 16:55:46,805][10139] InferenceWorker_p0-w0: resuming experience collection (750 times)
[2025-02-13 16:55:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3076096. Throughput: 0: 208.0. Samples: 771412. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:55:50,572][00196] Avg episode reward: [(0, '19.184')]
[2025-02-13 16:55:55,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3080192. Throughput: 0: 203.7. Samples: 772582. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:55:55,578][00196] Avg episode reward: [(0, '18.908')]
[2025-02-13 16:55:55,994][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000753_3084288.pth...
[2025-02-13 16:55:56,125][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000705_2887680.pth
[2025-02-13 16:56:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 3084288. Throughput: 0: 212.5. Samples: 773314. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:56:00,573][00196] Avg episode reward: [(0, '19.515')]
[2025-02-13 16:56:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3088384. Throughput: 0: 206.2. Samples: 774456. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:56:05,578][00196] Avg episode reward: [(0, '19.446')]
[2025-02-13 16:56:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3092480. Throughput: 0: 204.1. Samples: 775618. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:56:10,576][00196] Avg episode reward: [(0, '20.166')]
[2025-02-13 16:56:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3096576. Throughput: 0: 208.2. Samples: 776486. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:56:15,572][00196] Avg episode reward: [(0, '20.221')]
[2025-02-13 16:56:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3100672. Throughput: 0: 207.6. Samples: 777570. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:56:20,577][00196] Avg episode reward: [(0, '20.347')]
[2025-02-13 16:56:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3104768. Throughput: 0: 204.4. Samples: 778708. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:56:25,579][00196] Avg episode reward: [(0, '20.203')]
[2025-02-13 16:56:30,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 3108864. Throughput: 0: 205.4. Samples: 779536. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:56:30,577][00196] Avg episode reward: [(0, '20.128')]
[2025-02-13 16:56:31,949][10139] Updated weights for policy 0, policy_version 760 (0.0539)
[2025-02-13 16:56:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3112960. Throughput: 0: 203.8. Samples: 780582. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:56:35,572][00196] Avg episode reward: [(0, '20.086')]
[2025-02-13 16:56:40,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3117056. Throughput: 0: 204.5. Samples: 781784. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:56:40,574][00196] Avg episode reward: [(0, '19.263')]
[2025-02-13 16:56:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3121152. Throughput: 0: 209.8. Samples: 782754. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:56:45,571][00196] Avg episode reward: [(0, '19.091')]
[2025-02-13 16:56:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3125248. Throughput: 0: 206.2. Samples: 783736. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:56:50,579][00196] Avg episode reward: [(0, '19.654')]
[2025-02-13 16:56:55,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 819.2). Total num frames: 3133440. Throughput: 0: 205.8. Samples: 784880. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:56:55,579][00196] Avg episode reward: [(0, '19.195')]
[2025-02-13 16:57:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3133440. Throughput: 0: 209.4. Samples: 785908. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:57:00,576][00196] Avg episode reward: [(0, '19.138')]
[2025-02-13 16:57:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3137536. Throughput: 0: 206.1. Samples: 786846. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:57:05,573][00196] Avg episode reward: [(0, '18.761')]
[2025-02-13 16:57:10,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 819.2). Total num frames: 3145728. Throughput: 0: 206.8. Samples: 788012. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:57:10,575][00196] Avg episode reward: [(0, '19.277')]
[2025-02-13 16:57:15,572][00196] Fps is (10 sec: 1228.5, 60 sec: 887.4, 300 sec: 819.2). Total num frames: 3149824. Throughput: 0: 209.2. Samples: 788948. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:57:15,578][00196] Avg episode reward: [(0, '19.577')]
[2025-02-13 16:57:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3149824. Throughput: 0: 206.2. Samples: 789860. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:57:20,573][00196] Avg episode reward: [(0, '19.752')]
[2025-02-13 16:57:21,081][10139] Updated weights for policy 0, policy_version 770 (0.0057)
[2025-02-13 16:57:25,569][00196] Fps is (10 sec: 409.7, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3153920. Throughput: 0: 205.6. Samples: 791034. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:57:25,582][00196] Avg episode reward: [(0, '19.453')]
[2025-02-13 16:57:30,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 819.2). Total num frames: 3162112. Throughput: 0: 205.6. Samples: 792006. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:57:30,579][00196] Avg episode reward: [(0, '19.372')]
[2025-02-13 16:57:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3162112. Throughput: 0: 202.7. Samples: 792858. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:57:35,572][00196] Avg episode reward: [(0, '19.385')]
[2025-02-13 16:57:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3166208. Throughput: 0: 204.4. Samples: 794076. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:57:40,577][00196] Avg episode reward: [(0, '19.438')]
[2025-02-13 16:57:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3170304. Throughput: 0: 200.9. Samples: 794948. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:57:45,572][00196] Avg episode reward: [(0, '19.564')]
[2025-02-13 16:57:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3174400. Throughput: 0: 201.6. Samples: 795920. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:57:50,584][00196] Avg episode reward: [(0, '19.807')]
[2025-02-13 16:57:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3178496. Throughput: 0: 204.2. Samples: 797200. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:57:55,574][00196] Avg episode reward: [(0, '19.341')]
[2025-02-13 16:57:56,357][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000777_3182592.pth...
[2025-02-13 16:57:56,483][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000729_2985984.pth
[2025-02-13 16:58:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3182592. Throughput: 0: 199.4. Samples: 797920. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:58:00,573][00196] Avg episode reward: [(0, '19.646')]
[2025-02-13 16:58:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3186688. Throughput: 0: 203.6. Samples: 799020. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:58:05,574][00196] Avg episode reward: [(0, '20.011')]
[2025-02-13 16:58:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3190784. Throughput: 0: 204.0. Samples: 800214. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:58:10,573][00196] Avg episode reward: [(0, '19.358')]
[2025-02-13 16:58:11,550][10139] Updated weights for policy 0, policy_version 780 (0.1108)
[2025-02-13 16:58:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 805.3). Total num frames: 3194880. Throughput: 0: 196.0. Samples: 800828. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:58:15,582][00196] Avg episode reward: [(0, '20.072')]
[2025-02-13 16:58:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3198976. Throughput: 0: 203.4. Samples: 802010. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:58:20,577][00196] Avg episode reward: [(0, '20.449')]
[2025-02-13 16:58:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3203072. Throughput: 0: 205.2. Samples: 803308. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:58:25,575][00196] Avg episode reward: [(0, '20.812')]
[2025-02-13 16:58:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3207168. Throughput: 0: 197.0. Samples: 803814. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:58:30,573][00196] Avg episode reward: [(0, '21.465')]
[2025-02-13 16:58:31,417][10122] Saving new best policy, reward=20.812!
[2025-02-13 16:58:35,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3211264. Throughput: 0: 201.9. Samples: 805004. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:58:35,576][00196] Avg episode reward: [(0, '21.281')]
[2025-02-13 16:58:37,882][10122] Saving new best policy, reward=21.465!
[2025-02-13 16:58:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3215360. Throughput: 0: 201.6. Samples: 806274. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:58:40,583][00196] Avg episode reward: [(0, '20.960')]
[2025-02-13 16:58:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3219456. Throughput: 0: 192.0. Samples: 806562. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 16:58:45,573][00196] Avg episode reward: [(0, '20.938')]
[2025-02-13 16:58:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3223552. Throughput: 0: 202.0. Samples: 808110. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:58:50,573][00196] Avg episode reward: [(0, '21.194')]
[2025-02-13 16:58:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3227648. Throughput: 0: 199.5. Samples: 809190. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:58:55,572][00196] Avg episode reward: [(0, '20.478')]
[2025-02-13 16:59:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3231744. Throughput: 0: 200.4. Samples: 809846. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:59:00,577][00196] Avg episode reward: [(0, '20.745')]
[2025-02-13 16:59:01,731][10139] Updated weights for policy 0, policy_version 790 (0.1013)
[2025-02-13 16:59:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3235840. Throughput: 0: 205.0. Samples: 811236. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 16:59:05,574][00196] Avg episode reward: [(0, '21.158')]
[2025-02-13 16:59:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3239936. Throughput: 0: 201.9. Samples: 812394. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:59:10,572][00196] Avg episode reward: [(0, '20.874')]
[2025-02-13 16:59:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3244032. Throughput: 0: 201.7. Samples: 812890. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:59:15,578][00196] Avg episode reward: [(0, '20.651')]
[2025-02-13 16:59:20,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3248128. Throughput: 0: 205.6. Samples: 814258. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:59:20,579][00196] Avg episode reward: [(0, '21.105')]
[2025-02-13 16:59:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3252224. Throughput: 0: 205.7. Samples: 815530. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 16:59:25,573][00196] Avg episode reward: [(0, '21.043')]
[2025-02-13 16:59:30,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3256320. Throughput: 0: 210.4. Samples: 816032. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:59:30,572][00196] Avg episode reward: [(0, '21.403')]
[2025-02-13 16:59:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3260416. Throughput: 0: 207.8. Samples: 817462. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:59:35,577][00196] Avg episode reward: [(0, '21.320')]
[2025-02-13 16:59:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3264512. Throughput: 0: 203.7. Samples: 818358. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:59:40,577][00196] Avg episode reward: [(0, '21.614')]
[2025-02-13 16:59:42,821][10122] Saving new best policy, reward=21.614!
[2025-02-13 16:59:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3268608. Throughput: 0: 201.2. Samples: 818898. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:59:45,573][00196] Avg episode reward: [(0, '21.731')]
[2025-02-13 16:59:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3272704. Throughput: 0: 206.0. Samples: 820506. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:59:50,572][00196] Avg episode reward: [(0, '21.805')]
[2025-02-13 16:59:52,831][10122] Saving new best policy, reward=21.731!
[2025-02-13 16:59:52,829][10139] Updated weights for policy 0, policy_version 800 (0.0566)
[2025-02-13 16:59:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3276800. Throughput: 0: 203.5. Samples: 821552. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 16:59:55,572][00196] Avg episode reward: [(0, '21.759')]
[2025-02-13 16:59:55,854][10122] Signal inference workers to stop experience collection... (800 times)
[2025-02-13 16:59:55,899][10139] InferenceWorker_p0-w0: stopping experience collection (800 times)
[2025-02-13 16:59:57,632][10122] Signal inference workers to resume experience collection... (800 times)
[2025-02-13 16:59:57,634][10139] InferenceWorker_p0-w0: resuming experience collection (800 times)
[2025-02-13 16:59:57,643][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000801_3280896.pth...
[2025-02-13 16:59:57,764][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000753_3084288.pth
[2025-02-13 16:59:57,780][10122] Saving new best policy, reward=21.805!
[2025-02-13 17:00:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3280896. Throughput: 0: 203.0. Samples: 822024. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:00:00,579][00196] Avg episode reward: [(0, '20.988')]
[2025-02-13 17:00:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3284992. Throughput: 0: 203.5. Samples: 823416. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:00:05,576][00196] Avg episode reward: [(0, '21.125')]
[2025-02-13 17:00:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3289088. Throughput: 0: 194.5. Samples: 824282. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:00:10,573][00196] Avg episode reward: [(0, '20.569')]
[2025-02-13 17:00:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3293184. Throughput: 0: 194.8. Samples: 824800. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:00:15,574][00196] Avg episode reward: [(0, '20.526')]
[2025-02-13 17:00:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3297280. Throughput: 0: 199.0. Samples: 826416. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:00:20,587][00196] Avg episode reward: [(0, '20.336')]
[2025-02-13 17:00:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3301376. Throughput: 0: 197.3. Samples: 827236. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:00:25,573][00196] Avg episode reward: [(0, '20.336')]
[2025-02-13 17:00:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3305472. Throughput: 0: 199.9. Samples: 827892. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:00:30,582][00196] Avg episode reward: [(0, '20.143')]
[2025-02-13 17:00:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3309568. Throughput: 0: 201.2. Samples: 829560. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:00:35,578][00196] Avg episode reward: [(0, '20.096')]
[2025-02-13 17:00:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3313664. Throughput: 0: 196.6. Samples: 830398. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:00:40,572][00196] Avg episode reward: [(0, '20.555')]
[2025-02-13 17:00:43,631][10139] Updated weights for policy 0, policy_version 810 (0.1181)
[2025-02-13 17:00:45,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3317760. Throughput: 0: 198.6. Samples: 830962. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:00:45,578][00196] Avg episode reward: [(0, '20.759')]
[2025-02-13 17:00:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3321856. Throughput: 0: 206.1. Samples: 832692. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:00:50,573][00196] Avg episode reward: [(0, '20.735')]
[2025-02-13 17:00:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3325952. Throughput: 0: 205.9. Samples: 833546. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:00:55,572][00196] Avg episode reward: [(0, '21.185')]
[2025-02-13 17:01:00,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3330048. Throughput: 0: 207.8. Samples: 834152. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 17:01:00,576][00196] Avg episode reward: [(0, '21.267')]
[2025-02-13 17:01:05,576][00196] Fps is (10 sec: 818.6, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 3334144. Throughput: 0: 209.8. Samples: 835860. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 17:01:05,591][00196] Avg episode reward: [(0, '21.356')]
[2025-02-13 17:01:10,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3338240. Throughput: 0: 205.1. Samples: 836464. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:01:10,575][00196] Avg episode reward: [(0, '21.215')]
[2025-02-13 17:01:15,569][00196] Fps is (10 sec: 819.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3342336. Throughput: 0: 202.9. Samples: 837022. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:01:15,573][00196] Avg episode reward: [(0, '20.937')]
[2025-02-13 17:01:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3346432. Throughput: 0: 199.7. Samples: 838546. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:01:20,572][00196] Avg episode reward: [(0, '20.976')]
[2025-02-13 17:01:25,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3350528. Throughput: 0: 205.7. Samples: 839656. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:01:25,577][00196] Avg episode reward: [(0, '21.192')]
[2025-02-13 17:01:30,575][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 3354624. Throughput: 0: 204.1. Samples: 840146. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:01:30,578][00196] Avg episode reward: [(0, '20.753')]
[2025-02-13 17:01:33,056][10139] Updated weights for policy 0, policy_version 820 (0.0062)
[2025-02-13 17:01:35,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3358720. Throughput: 0: 199.8. Samples: 841682. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:01:35,573][00196] Avg episode reward: [(0, '20.531')]
[2025-02-13 17:01:40,575][00196] Fps is (10 sec: 819.2, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 3362816. Throughput: 0: 205.5. Samples: 842796. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:01:40,585][00196] Avg episode reward: [(0, '20.146')]
[2025-02-13 17:01:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3366912. Throughput: 0: 201.5. Samples: 843220. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:01:45,573][00196] Avg episode reward: [(0, '20.758')]
[2025-02-13 17:01:50,570][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3371008. Throughput: 0: 192.5. Samples: 844522. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:01:50,579][00196] Avg episode reward: [(0, '20.739')]
[2025-02-13 17:01:52,588][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000824_3375104.pth...
[2025-02-13 17:01:52,708][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000777_3182592.pth
[2025-02-13 17:01:55,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3375104. Throughput: 0: 208.1. Samples: 845828. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:01:55,578][00196] Avg episode reward: [(0, '20.611')]
[2025-02-13 17:02:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3379200. Throughput: 0: 206.2. Samples: 846302. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:02:00,573][00196] Avg episode reward: [(0, '20.344')]
[2025-02-13 17:02:05,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 3383296. Throughput: 0: 199.2. Samples: 847510. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:02:05,572][00196] Avg episode reward: [(0, '20.620')]
[2025-02-13 17:02:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3387392. Throughput: 0: 208.0. Samples: 849014. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:02:10,572][00196] Avg episode reward: [(0, '20.956')]
[2025-02-13 17:02:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3391488. Throughput: 0: 204.6. Samples: 849350. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:02:15,574][00196] Avg episode reward: [(0, '21.367')]
[2025-02-13 17:02:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3395584. Throughput: 0: 194.0. Samples: 850410. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:02:20,572][00196] Avg episode reward: [(0, '20.657')]
[2025-02-13 17:02:23,235][10139] Updated weights for policy 0, policy_version 830 (0.2173)
[2025-02-13 17:02:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3399680. Throughput: 0: 207.1. Samples: 852116. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:02:25,576][00196] Avg episode reward: [(0, '20.733')]
[2025-02-13 17:02:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 3403776. Throughput: 0: 205.5. Samples: 852468. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:02:30,573][00196] Avg episode reward: [(0, '20.733')]
[2025-02-13 17:02:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3407872. Throughput: 0: 200.9. Samples: 853562. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:02:35,572][00196] Avg episode reward: [(0, '20.541')]
[2025-02-13 17:02:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 3411968. Throughput: 0: 207.3. Samples: 855158. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:02:40,571][00196] Avg episode reward: [(0, '20.319')]
[2025-02-13 17:02:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3416064. Throughput: 0: 205.2. Samples: 855534. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:02:45,581][00196] Avg episode reward: [(0, '20.161')]
[2025-02-13 17:02:50,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3420160. Throughput: 0: 205.0. Samples: 856734. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:02:50,582][00196] Avg episode reward: [(0, '20.659')]
[2025-02-13 17:02:55,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3424256. Throughput: 0: 199.9. Samples: 858012. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:02:55,577][00196] Avg episode reward: [(0, '20.549')]
[2025-02-13 17:03:00,570][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3428352. Throughput: 0: 204.2. Samples: 858540. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:03:00,575][00196] Avg episode reward: [(0, '20.343')]
[2025-02-13 17:03:05,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3432448. Throughput: 0: 204.8. Samples: 859626. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:03:05,580][00196] Avg episode reward: [(0, '20.172')]
[2025-02-13 17:03:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3436544. Throughput: 0: 193.3. Samples: 860814. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:03:10,576][00196] Avg episode reward: [(0, '19.938')]
[2025-02-13 17:03:15,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3436544. Throughput: 0: 200.4. Samples: 861486. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:03:15,573][00196] Avg episode reward: [(0, '20.474')]
[2025-02-13 17:03:16,780][10139] Updated weights for policy 0, policy_version 840 (0.1232)
[2025-02-13 17:03:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3440640. Throughput: 0: 204.8. Samples: 862778. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:03:20,583][00196] Avg episode reward: [(0, '20.451')]
[2025-02-13 17:03:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3444736. Throughput: 0: 191.6. Samples: 863778. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:03:25,581][00196] Avg episode reward: [(0, '20.673')]
[2025-02-13 17:03:30,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3448832. Throughput: 0: 197.2. Samples: 864410. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:03:30,586][00196] Avg episode reward: [(0, '21.270')]
[2025-02-13 17:03:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3452928. Throughput: 0: 193.7. Samples: 865452. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:03:35,574][00196] Avg episode reward: [(0, '22.023')]
[2025-02-13 17:03:40,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3457024. Throughput: 0: 195.4. Samples: 866802. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:03:40,576][00196] Avg episode reward: [(0, '21.813')]
[2025-02-13 17:03:41,469][10122] Saving new best policy, reward=22.023!
[2025-02-13 17:03:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3461120. Throughput: 0: 194.8. Samples: 867306. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:03:45,575][00196] Avg episode reward: [(0, '21.437')]
[2025-02-13 17:03:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 805.3). Total num frames: 3465216. Throughput: 0: 192.1. Samples: 868272. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:03:50,573][00196] Avg episode reward: [(0, '21.131')]
[2025-02-13 17:03:52,676][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000847_3469312.pth...
[2025-02-13 17:03:52,792][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000801_3280896.pth
[2025-02-13 17:03:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 805.3). Total num frames: 3469312. Throughput: 0: 201.2. Samples: 869868. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:03:55,578][00196] Avg episode reward: [(0, '21.031')]
[2025-02-13 17:04:00,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3473408. Throughput: 0: 192.2. Samples: 870134. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:04:00,579][00196] Avg episode reward: [(0, '21.156')]
[2025-02-13 17:04:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3477504. Throughput: 0: 182.3. Samples: 870980. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:04:05,582][00196] Avg episode reward: [(0, '21.987')]
[2025-02-13 17:04:09,292][10139] Updated weights for policy 0, policy_version 850 (0.1034)
[2025-02-13 17:04:10,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3481600. Throughput: 0: 191.1. Samples: 872378. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:04:10,573][00196] Avg episode reward: [(0, '21.871')]
[2025-02-13 17:04:11,866][10122] Signal inference workers to stop experience collection... (850 times)
[2025-02-13 17:04:11,931][10139] InferenceWorker_p0-w0: stopping experience collection (850 times)
[2025-02-13 17:04:13,239][10122] Signal inference workers to resume experience collection... (850 times)
[2025-02-13 17:04:13,241][10139] InferenceWorker_p0-w0: resuming experience collection (850 times)
[2025-02-13 17:04:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3485696. Throughput: 0: 191.1. Samples: 873010. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:04:15,576][00196] Avg episode reward: [(0, '21.318')]
[2025-02-13 17:04:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3489792. Throughput: 0: 190.2. Samples: 874010. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:04:20,572][00196] Avg episode reward: [(0, '20.873')]
[2025-02-13 17:04:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3493888. Throughput: 0: 191.5. Samples: 875418. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:04:25,572][00196] Avg episode reward: [(0, '21.021')]
[2025-02-13 17:04:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3497984. Throughput: 0: 194.6. Samples: 876064. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:04:30,576][00196] Avg episode reward: [(0, '21.005')]
[2025-02-13 17:04:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3497984. Throughput: 0: 195.5. Samples: 877068. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:04:35,577][00196] Avg episode reward: [(0, '21.297')]
[2025-02-13 17:04:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3506176. Throughput: 0: 186.4. Samples: 878258. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:04:40,572][00196] Avg episode reward: [(0, '21.158')]
[2025-02-13 17:04:45,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3510272. Throughput: 0: 199.3. Samples: 879102. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:04:45,571][00196] Avg episode reward: [(0, '21.534')]
[2025-02-13 17:04:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3514368. Throughput: 0: 203.6. Samples: 880144. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:04:50,573][00196] Avg episode reward: [(0, '21.011')]
[2025-02-13 17:04:55,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3518464. Throughput: 0: 198.6. Samples: 881316. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:04:55,583][00196] Avg episode reward: [(0, '20.580')]
[2025-02-13 17:04:59,171][10139] Updated weights for policy 0, policy_version 860 (0.1601)
[2025-02-13 17:05:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3522560. Throughput: 0: 204.0. Samples: 882188. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:00,573][00196] Avg episode reward: [(0, '21.065')]
[2025-02-13 17:05:05,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3526656. Throughput: 0: 204.3. Samples: 883202. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:05,574][00196] Avg episode reward: [(0, '20.714')]
[2025-02-13 17:05:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3530752. Throughput: 0: 195.2. Samples: 884202. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:10,573][00196] Avg episode reward: [(0, '21.078')]
[2025-02-13 17:05:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3534848. Throughput: 0: 204.0. Samples: 885242. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:15,573][00196] Avg episode reward: [(0, '21.302')]
[2025-02-13 17:05:20,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3538944. Throughput: 0: 203.9. Samples: 886242. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:20,574][00196] Avg episode reward: [(0, '21.578')]
[2025-02-13 17:05:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3543040. Throughput: 0: 203.6. Samples: 887420. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:25,578][00196] Avg episode reward: [(0, '21.779')]
[2025-02-13 17:05:30,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3547136. Throughput: 0: 204.5. Samples: 888304. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:30,576][00196] Avg episode reward: [(0, '21.820')]
[2025-02-13 17:05:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 805.3). Total num frames: 3551232. Throughput: 0: 206.6. Samples: 889440. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:05:35,571][00196] Avg episode reward: [(0, '21.759')]
[2025-02-13 17:05:40,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3555328. Throughput: 0: 207.2. Samples: 890640. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:05:40,572][00196] Avg episode reward: [(0, '21.663')]
[2025-02-13 17:05:45,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 3559424. Throughput: 0: 204.8. Samples: 891406. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:45,577][00196] Avg episode reward: [(0, '21.467')]
[2025-02-13 17:05:48,569][10139] Updated weights for policy 0, policy_version 870 (0.1225)
[2025-02-13 17:05:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3563520. Throughput: 0: 209.0. Samples: 892608. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:05:50,573][00196] Avg episode reward: [(0, '22.138')]
[2025-02-13 17:05:54,383][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000871_3567616.pth...
[2025-02-13 17:05:54,509][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000824_3375104.pth
[2025-02-13 17:05:54,533][10122] Saving new best policy, reward=22.138!
[2025-02-13 17:05:55,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3567616. Throughput: 0: 212.8. Samples: 893780. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:05:55,574][00196] Avg episode reward: [(0, '22.675')]
[2025-02-13 17:05:59,054][10122] Saving new best policy, reward=22.675!
[2025-02-13 17:06:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3571712. Throughput: 0: 205.3. Samples: 894480. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:06:00,573][00196] Avg episode reward: [(0, '22.751')]
[2025-02-13 17:06:03,455][10122] Saving new best policy, reward=22.751!
[2025-02-13 17:06:05,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 3575808. Throughput: 0: 213.2. Samples: 895836. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:06:05,586][00196] Avg episode reward: [(0, '22.707')]
[2025-02-13 17:06:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3579904. Throughput: 0: 208.4. Samples: 896796. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:06:10,573][00196] Avg episode reward: [(0, '23.101')]
[2025-02-13 17:06:14,181][10122] Saving new best policy, reward=23.101!
[2025-02-13 17:06:15,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3584000. Throughput: 0: 205.9. Samples: 897568.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:06:15,572][00196] Avg episode reward: [(0, '22.976')] [2025-02-13 17:06:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3588096. Throughput: 0: 207.4. Samples: 898774. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:06:20,577][00196] Avg episode reward: [(0, '23.490')] [2025-02-13 17:06:24,273][10122] Saving new best policy, reward=23.490! [2025-02-13 17:06:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3592192. Throughput: 0: 205.4. Samples: 899884. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:06:25,582][00196] Avg episode reward: [(0, '23.677')] [2025-02-13 17:06:29,019][10122] Saving new best policy, reward=23.677! [2025-02-13 17:06:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3596288. Throughput: 0: 204.8. Samples: 900620. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:06:30,573][00196] Avg episode reward: [(0, '24.033')] [2025-02-13 17:06:33,648][10122] Saving new best policy, reward=24.033! [2025-02-13 17:06:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3600384. Throughput: 0: 206.8. Samples: 901916. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:06:35,577][00196] Avg episode reward: [(0, '24.631')] [2025-02-13 17:06:39,548][10122] Saving new best policy, reward=24.631! [2025-02-13 17:06:39,560][10139] Updated weights for policy 0, policy_version 880 (0.0560) [2025-02-13 17:06:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3604480. Throughput: 0: 200.2. Samples: 902790. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:06:40,572][00196] Avg episode reward: [(0, '24.631')] [2025-02-13 17:06:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 3608576. Throughput: 0: 204.3. Samples: 903672. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:06:45,572][00196] Avg episode reward: [(0, '24.330')] [2025-02-13 17:06:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3612672. Throughput: 0: 206.6. Samples: 905134. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:06:50,574][00196] Avg episode reward: [(0, '23.871')] [2025-02-13 17:06:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3616768. Throughput: 0: 207.1. Samples: 906114. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:06:55,576][00196] Avg episode reward: [(0, '24.402')] [2025-02-13 17:07:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3620864. Throughput: 0: 204.8. Samples: 906784. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:07:00,571][00196] Avg episode reward: [(0, '24.714')] [2025-02-13 17:07:03,720][10122] Saving new best policy, reward=24.714! [2025-02-13 17:07:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 3624960. Throughput: 0: 206.2. Samples: 908054. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:07:05,580][00196] Avg episode reward: [(0, '25.201')] [2025-02-13 17:07:09,606][10122] Saving new best policy, reward=25.201! [2025-02-13 17:07:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3629056. Throughput: 0: 203.1. Samples: 909024. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:07:10,575][00196] Avg episode reward: [(0, '25.136')] [2025-02-13 17:07:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3633152. Throughput: 0: 205.6. Samples: 909870. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:07:15,572][00196] Avg episode reward: [(0, '25.349')] [2025-02-13 17:07:18,755][10122] Saving new best policy, reward=25.349! 
[2025-02-13 17:07:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3637248. Throughput: 0: 204.4. Samples: 911116. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:07:20,574][00196] Avg episode reward: [(0, '25.475')] [2025-02-13 17:07:24,745][10122] Saving new best policy, reward=25.475! [2025-02-13 17:07:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3641344. Throughput: 0: 205.1. Samples: 912018. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:07:25,576][00196] Avg episode reward: [(0, '25.175')] [2025-02-13 17:07:30,130][10139] Updated weights for policy 0, policy_version 890 (0.0640) [2025-02-13 17:07:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3645440. Throughput: 0: 205.4. Samples: 912916. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:07:30,572][00196] Avg episode reward: [(0, '24.735')] [2025-02-13 17:07:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3649536. Throughput: 0: 195.9. Samples: 913948. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:07:35,581][00196] Avg episode reward: [(0, '24.269')] [2025-02-13 17:07:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 3649536. Throughput: 0: 197.0. Samples: 914980. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:07:40,578][00196] Avg episode reward: [(0, '24.325')] [2025-02-13 17:07:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3657728. Throughput: 0: 200.6. Samples: 915810. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:07:45,575][00196] Avg episode reward: [(0, '23.870')] [2025-02-13 17:07:50,572][00196] Fps is (10 sec: 1228.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3661824. Throughput: 0: 198.9. Samples: 917006. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:07:50,575][00196] Avg episode reward: [(0, '23.894')] [2025-02-13 17:07:55,225][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000895_3665920.pth... [2025-02-13 17:07:55,354][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000847_3469312.pth [2025-02-13 17:07:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3665920. Throughput: 0: 200.8. Samples: 918062. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:07:55,572][00196] Avg episode reward: [(0, '23.622')] [2025-02-13 17:08:00,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3670016. Throughput: 0: 203.0. Samples: 919004. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:08:00,572][00196] Avg episode reward: [(0, '22.699')] [2025-02-13 17:08:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3674112. Throughput: 0: 199.2. Samples: 920078. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:05,573][00196] Avg episode reward: [(0, '22.434')] [2025-02-13 17:08:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3678208. Throughput: 0: 203.3. Samples: 921168. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:10,574][00196] Avg episode reward: [(0, '22.581')] [2025-02-13 17:08:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3682304. Throughput: 0: 204.3. Samples: 922110. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:08:15,573][00196] Avg episode reward: [(0, '22.337')] [2025-02-13 17:08:19,328][10139] Updated weights for policy 0, policy_version 900 (0.1282) [2025-02-13 17:08:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3686400. Throughput: 0: 205.0. Samples: 923174. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:08:20,572][00196] Avg episode reward: [(0, '21.649')] [2025-02-13 17:08:22,102][10122] Signal inference workers to stop experience collection... (900 times) [2025-02-13 17:08:22,212][10139] InferenceWorker_p0-w0: stopping experience collection (900 times) [2025-02-13 17:08:24,647][10122] Signal inference workers to resume experience collection... (900 times) [2025-02-13 17:08:24,647][10139] InferenceWorker_p0-w0: resuming experience collection (900 times) [2025-02-13 17:08:25,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3690496. Throughput: 0: 207.5. Samples: 924318. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:25,575][00196] Avg episode reward: [(0, '21.916')] [2025-02-13 17:08:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3694592. Throughput: 0: 207.5. Samples: 925148. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:30,574][00196] Avg episode reward: [(0, '21.924')] [2025-02-13 17:08:35,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3698688. Throughput: 0: 204.2. Samples: 926196. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:35,572][00196] Avg episode reward: [(0, '22.370')] [2025-02-13 17:08:40,570][00196] Fps is (10 sec: 819.1, 60 sec: 887.4, 300 sec: 819.2). Total num frames: 3702784. Throughput: 0: 206.6. Samples: 927358. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:40,576][00196] Avg episode reward: [(0, '22.547')] [2025-02-13 17:08:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3706880. Throughput: 0: 203.8. Samples: 928174. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:45,574][00196] Avg episode reward: [(0, '22.317')] [2025-02-13 17:08:50,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3710976. 
Throughput: 0: 207.2. Samples: 929404. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:50,573][00196] Avg episode reward: [(0, '22.509')] [2025-02-13 17:08:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3715072. Throughput: 0: 209.7. Samples: 930606. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:08:55,583][00196] Avg episode reward: [(0, '22.509')] [2025-02-13 17:09:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3719168. Throughput: 0: 205.5. Samples: 931356. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:09:00,583][00196] Avg episode reward: [(0, '21.879')] [2025-02-13 17:09:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3723264. Throughput: 0: 204.5. Samples: 932378. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:09:05,575][00196] Avg episode reward: [(0, '21.684')] [2025-02-13 17:09:08,811][10139] Updated weights for policy 0, policy_version 910 (0.1149) [2025-02-13 17:09:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3727360. Throughput: 0: 209.4. Samples: 933740. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:09:10,576][00196] Avg episode reward: [(0, '21.619')] [2025-02-13 17:09:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3731456. Throughput: 0: 202.9. Samples: 934280. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:09:15,572][00196] Avg episode reward: [(0, '21.388')] [2025-02-13 17:09:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3735552. Throughput: 0: 204.4. Samples: 935392. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:09:20,579][00196] Avg episode reward: [(0, '21.518')] [2025-02-13 17:09:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3739648. Throughput: 0: 207.5. 
Samples: 936694. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:09:25,574][00196] Avg episode reward: [(0, '21.273')] [2025-02-13 17:09:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 833.1). Total num frames: 3743744. Throughput: 0: 202.8. Samples: 937298. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:09:30,571][00196] Avg episode reward: [(0, '21.703')] [2025-02-13 17:09:35,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3747840. Throughput: 0: 201.8. Samples: 938486. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:09:35,573][00196] Avg episode reward: [(0, '21.711')] [2025-02-13 17:09:40,575][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 3751936. Throughput: 0: 205.1. Samples: 939836. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:09:40,583][00196] Avg episode reward: [(0, '21.609')] [2025-02-13 17:09:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3756032. Throughput: 0: 202.7. Samples: 940476. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:09:45,578][00196] Avg episode reward: [(0, '21.996')] [2025-02-13 17:09:50,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3760128. Throughput: 0: 202.5. Samples: 941490. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:09:50,580][00196] Avg episode reward: [(0, '21.260')] [2025-02-13 17:09:54,020][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000919_3764224.pth... [2025-02-13 17:09:54,144][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000871_3567616.pth [2025-02-13 17:09:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3764224. Throughput: 0: 204.4. Samples: 942936. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:09:55,573][00196] Avg episode reward: [(0, '21.542')] [2025-02-13 17:10:00,546][10139] Updated weights for policy 0, policy_version 920 (0.1028) [2025-02-13 17:10:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3768320. Throughput: 0: 205.3. Samples: 943520. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:00,580][00196] Avg episode reward: [(0, '21.979')] [2025-02-13 17:10:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3772416. Throughput: 0: 202.9. Samples: 944524. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:05,571][00196] Avg episode reward: [(0, '21.807')] [2025-02-13 17:10:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3776512. Throughput: 0: 202.9. Samples: 945824. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:10,574][00196] Avg episode reward: [(0, '22.435')] [2025-02-13 17:10:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3776512. Throughput: 0: 203.2. Samples: 946440. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:15,572][00196] Avg episode reward: [(0, '22.594')] [2025-02-13 17:10:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3784704. Throughput: 0: 202.5. Samples: 947600. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:20,578][00196] Avg episode reward: [(0, '22.995')] [2025-02-13 17:10:25,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3788800. Throughput: 0: 202.5. Samples: 948948. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:25,572][00196] Avg episode reward: [(0, '23.465')] [2025-02-13 17:10:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3792896. Throughput: 0: 204.4. Samples: 949676. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:30,576][00196] Avg episode reward: [(0, '23.599')] [2025-02-13 17:10:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3796992. Throughput: 0: 204.0. Samples: 950672. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:35,581][00196] Avg episode reward: [(0, '23.891')] [2025-02-13 17:10:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 3801088. Throughput: 0: 204.0. Samples: 952118. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:40,573][00196] Avg episode reward: [(0, '22.166')] [2025-02-13 17:10:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3805184. Throughput: 0: 205.6. Samples: 952772. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:45,573][00196] Avg episode reward: [(0, '22.419')] [2025-02-13 17:10:50,060][10139] Updated weights for policy 0, policy_version 930 (0.1063) [2025-02-13 17:10:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3809280. Throughput: 0: 206.3. Samples: 953806. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:50,571][00196] Avg episode reward: [(0, '22.168')] [2025-02-13 17:10:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3813376. Throughput: 0: 210.1. Samples: 955278. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:10:55,576][00196] Avg episode reward: [(0, '21.305')] [2025-02-13 17:11:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3817472. Throughput: 0: 210.3. Samples: 955904. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:11:00,573][00196] Avg episode reward: [(0, '21.569')] [2025-02-13 17:11:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3821568. Throughput: 0: 206.8. Samples: 956904. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:11:05,572][00196] Avg episode reward: [(0, '21.291')] [2025-02-13 17:11:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3825664. Throughput: 0: 212.0. Samples: 958490. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:11:10,572][00196] Avg episode reward: [(0, '21.635')] [2025-02-13 17:11:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 819.2). Total num frames: 3829760. Throughput: 0: 205.8. Samples: 958936. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:11:15,574][00196] Avg episode reward: [(0, '21.635')] [2025-02-13 17:11:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3833856. Throughput: 0: 206.4. Samples: 959958. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:11:20,581][00196] Avg episode reward: [(0, '21.591')] [2025-02-13 17:11:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3837952. Throughput: 0: 204.7. Samples: 961330. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:11:25,581][00196] Avg episode reward: [(0, '21.635')] [2025-02-13 17:11:30,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 3842048. Throughput: 0: 206.3. Samples: 962058. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:11:30,579][00196] Avg episode reward: [(0, '21.537')] [2025-02-13 17:11:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3846144. Throughput: 0: 204.5. Samples: 963010. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:11:35,579][00196] Avg episode reward: [(0, '21.897')] [2025-02-13 17:11:39,152][10139] Updated weights for policy 0, policy_version 940 (0.2361) [2025-02-13 17:11:40,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3850240. Throughput: 0: 205.6. Samples: 964528. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:11:40,573][00196] Avg episode reward: [(0, '21.318')] [2025-02-13 17:11:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3854336. Throughput: 0: 203.5. Samples: 965062. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:11:45,576][00196] Avg episode reward: [(0, '21.455')] [2025-02-13 17:11:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3858432. Throughput: 0: 204.0. Samples: 966082. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:11:50,573][00196] Avg episode reward: [(0, '21.744')] [2025-02-13 17:11:53,855][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000943_3862528.pth... [2025-02-13 17:11:53,968][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000895_3665920.pth [2025-02-13 17:11:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3862528. Throughput: 0: 204.1. Samples: 967674. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:11:55,572][00196] Avg episode reward: [(0, '21.888')] [2025-02-13 17:12:00,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 3866624. Throughput: 0: 203.9. Samples: 968114. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:12:00,578][00196] Avg episode reward: [(0, '21.747')] [2025-02-13 17:12:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3870720. Throughput: 0: 204.6. Samples: 969166. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:12:05,573][00196] Avg episode reward: [(0, '21.869')] [2025-02-13 17:12:10,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3874816. Throughput: 0: 200.0. Samples: 970332. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:12:10,575][00196] Avg episode reward: [(0, '21.947')] [2025-02-13 17:12:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3878912. Throughput: 0: 203.8. Samples: 971228. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:12:15,577][00196] Avg episode reward: [(0, '21.445')] [2025-02-13 17:12:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3878912. Throughput: 0: 206.0. Samples: 972280. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:12:20,572][00196] Avg episode reward: [(0, '21.370')] [2025-02-13 17:12:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3887104. Throughput: 0: 198.0. Samples: 973438. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:12:25,577][00196] Avg episode reward: [(0, '21.501')] [2025-02-13 17:12:29,820][10139] Updated weights for policy 0, policy_version 950 (0.1043) [2025-02-13 17:12:30,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 3891200. Throughput: 0: 205.5. Samples: 974308. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:12:30,572][00196] Avg episode reward: [(0, '21.251')] [2025-02-13 17:12:34,271][10122] Signal inference workers to stop experience collection... (950 times) [2025-02-13 17:12:34,414][10139] InferenceWorker_p0-w0: stopping experience collection (950 times) [2025-02-13 17:12:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 819.2). Total num frames: 3891200. Throughput: 0: 205.7. Samples: 975338. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:12:35,573][00196] Avg episode reward: [(0, '21.316')] [2025-02-13 17:12:36,430][10122] Signal inference workers to resume experience collection... 
(950 times) [2025-02-13 17:12:36,432][10139] InferenceWorker_p0-w0: resuming experience collection (950 times) [2025-02-13 17:12:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3895296. Throughput: 0: 193.6. Samples: 976386. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:12:40,574][00196] Avg episode reward: [(0, '21.355')] [2025-02-13 17:12:45,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3903488. Throughput: 0: 202.2. Samples: 977212. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:12:45,573][00196] Avg episode reward: [(0, '22.055')] [2025-02-13 17:12:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3903488. Throughput: 0: 204.9. Samples: 978386. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:12:50,573][00196] Avg episode reward: [(0, '22.706')] [2025-02-13 17:12:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3907584. Throughput: 0: 202.2. Samples: 979430. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:12:55,582][00196] Avg episode reward: [(0, '22.586')] [2025-02-13 17:13:00,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 3915776. Throughput: 0: 204.4. Samples: 980426. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:13:00,572][00196] Avg episode reward: [(0, '22.760')] [2025-02-13 17:13:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3915776. Throughput: 0: 204.2. Samples: 981468. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:13:05,574][00196] Avg episode reward: [(0, '23.155')] [2025-02-13 17:13:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3919872. Throughput: 0: 201.6. Samples: 982508. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:13:10,579][00196] Avg episode reward: [(0, '22.952')]
[2025-02-13 17:13:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3923968. Throughput: 0: 197.2. Samples: 983180. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:13:15,579][00196] Avg episode reward: [(0, '22.659')]
[2025-02-13 17:13:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3928064. Throughput: 0: 202.8. Samples: 984464. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:13:20,576][00196] Avg episode reward: [(0, '23.827')]
[2025-02-13 17:13:21,765][10139] Updated weights for policy 0, policy_version 960 (0.2107)
[2025-02-13 17:13:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3932160. Throughput: 0: 203.3. Samples: 985534. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:13:25,577][00196] Avg episode reward: [(0, '23.228')]
[2025-02-13 17:13:30,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 3940352. Throughput: 0: 201.5. Samples: 986278. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:13:30,573][00196] Avg episode reward: [(0, '23.216')]
[2025-02-13 17:13:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3940352. Throughput: 0: 203.4. Samples: 987538. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:13:35,573][00196] Avg episode reward: [(0, '23.655')]
[2025-02-13 17:13:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3944448. Throughput: 0: 203.9. Samples: 988606. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:13:40,572][00196] Avg episode reward: [(0, '23.489')]
[2025-02-13 17:13:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3948544. Throughput: 0: 195.3. Samples: 989214. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:13:45,584][00196] Avg episode reward: [(0, '23.207')]
[2025-02-13 17:13:50,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3952640. Throughput: 0: 198.2. Samples: 990386. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:13:50,579][00196] Avg episode reward: [(0, '23.273')]
[2025-02-13 17:13:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3956736. Throughput: 0: 203.8. Samples: 991678. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:13:55,577][00196] Avg episode reward: [(0, '22.946')]
[2025-02-13 17:13:56,635][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000967_3960832.pth...
[2025-02-13 17:13:56,754][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000919_3764224.pth
[2025-02-13 17:14:00,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3960832. Throughput: 0: 204.4. Samples: 992376. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:14:00,584][00196] Avg episode reward: [(0, '22.281')]
[2025-02-13 17:14:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3964928. Throughput: 0: 202.4. Samples: 993574. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:14:05,574][00196] Avg episode reward: [(0, '22.020')]
[2025-02-13 17:14:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3969024. Throughput: 0: 199.9. Samples: 994530. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:14:10,572][00196] Avg episode reward: [(0, '22.159')]
[2025-02-13 17:14:12,859][10139] Updated weights for policy 0, policy_version 970 (0.2897)
[2025-02-13 17:14:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3973120. Throughput: 0: 193.6. Samples: 994990. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:14:15,577][00196] Avg episode reward: [(0, '22.507')]
[2025-02-13 17:14:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3977216. Throughput: 0: 203.8. Samples: 996710. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:14:20,572][00196] Avg episode reward: [(0, '22.572')]
[2025-02-13 17:14:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3981312. Throughput: 0: 206.1. Samples: 997882. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:14:25,583][00196] Avg episode reward: [(0, '21.991')]
[2025-02-13 17:14:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 3985408. Throughput: 0: 206.2. Samples: 998494. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:14:30,578][00196] Avg episode reward: [(0, '22.344')]
[2025-02-13 17:14:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3989504. Throughput: 0: 211.7. Samples: 999914. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:14:35,573][00196] Avg episode reward: [(0, '21.808')]
[2025-02-13 17:14:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3993600. Throughput: 0: 207.2. Samples: 1001002. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:14:40,578][00196] Avg episode reward: [(0, '20.724')]
[2025-02-13 17:14:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 3997696. Throughput: 0: 205.3. Samples: 1001616. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:14:45,583][00196] Avg episode reward: [(0, '20.465')]
[2025-02-13 17:14:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4001792. Throughput: 0: 210.8. Samples: 1003060. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:14:50,572][00196] Avg episode reward: [(0, '20.158')]
[2025-02-13 17:14:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4005888. Throughput: 0: 211.7. Samples: 1004056. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:14:55,572][00196] Avg episode reward: [(0, '20.946')]
[2025-02-13 17:15:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4009984. Throughput: 0: 216.3. Samples: 1004722. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:00,582][00196] Avg episode reward: [(0, '21.158')]
[2025-02-13 17:15:01,843][10139] Updated weights for policy 0, policy_version 980 (0.0632)
[2025-02-13 17:15:05,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4014080. Throughput: 0: 206.2. Samples: 1005988. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:05,576][00196] Avg episode reward: [(0, '21.334')]
[2025-02-13 17:15:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4018176. Throughput: 0: 200.4. Samples: 1006900. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:10,582][00196] Avg episode reward: [(0, '21.432')]
[2025-02-13 17:15:15,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4022272. Throughput: 0: 203.1. Samples: 1007634. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:15,573][00196] Avg episode reward: [(0, '20.538')]
[2025-02-13 17:15:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4026368. Throughput: 0: 204.6. Samples: 1009120. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:20,572][00196] Avg episode reward: [(0, '20.023')]
[2025-02-13 17:15:25,572][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4030464. Throughput: 0: 199.7. Samples: 1009988. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:25,580][00196] Avg episode reward: [(0, '19.967')]
[2025-02-13 17:15:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4034560. Throughput: 0: 198.5. Samples: 1010548. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:30,582][00196] Avg episode reward: [(0, '19.900')]
[2025-02-13 17:15:35,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4038656. Throughput: 0: 203.1. Samples: 1012200. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:35,573][00196] Avg episode reward: [(0, '20.003')]
[2025-02-13 17:15:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4042752. Throughput: 0: 198.5. Samples: 1012988. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:40,576][00196] Avg episode reward: [(0, '19.479')]
[2025-02-13 17:15:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4046848. Throughput: 0: 190.5. Samples: 1013296. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:45,575][00196] Avg episode reward: [(0, '19.593')]
[2025-02-13 17:15:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4050944. Throughput: 0: 194.1. Samples: 1014720. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:50,581][00196] Avg episode reward: [(0, '20.369')]
[2025-02-13 17:15:53,234][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000990_4055040.pth...
[2025-02-13 17:15:53,239][10139] Updated weights for policy 0, policy_version 990 (0.0550)
[2025-02-13 17:15:53,364][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000943_3862528.pth
[2025-02-13 17:15:55,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4055040. Throughput: 0: 196.9. Samples: 1015760. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:15:55,574][00196] Avg episode reward: [(0, '20.156')]
[2025-02-13 17:16:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4059136. Throughput: 0: 193.5. Samples: 1016340. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:16:00,573][00196] Avg episode reward: [(0, '20.240')]
[2025-02-13 17:16:05,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4063232. Throughput: 0: 184.0. Samples: 1017398. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:16:05,571][00196] Avg episode reward: [(0, '20.350')]
[2025-02-13 17:16:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4067328. Throughput: 0: 190.3. Samples: 1018552. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:16:10,576][00196] Avg episode reward: [(0, '21.109')]
[2025-02-13 17:16:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4067328. Throughput: 0: 191.2. Samples: 1019154. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:16:15,578][00196] Avg episode reward: [(0, '21.869')]
[2025-02-13 17:16:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4075520. Throughput: 0: 182.4. Samples: 1020406. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:16:20,581][00196] Avg episode reward: [(0, '21.422')]
[2025-02-13 17:16:25,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4079616. Throughput: 0: 195.9. Samples: 1021802. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:16:25,573][00196] Avg episode reward: [(0, '20.986')]
[2025-02-13 17:16:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4079616. Throughput: 0: 202.8. Samples: 1022422. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:16:30,576][00196] Avg episode reward: [(0, '20.522')]
[2025-02-13 17:16:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4087808. Throughput: 0: 195.8. Samples: 1023532. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:16:35,575][00196] Avg episode reward: [(0, '20.388')]
[2025-02-13 17:16:40,575][00196] Fps is (10 sec: 1228.1, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 4091904. Throughput: 0: 191.2. Samples: 1024366. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:16:40,578][00196] Avg episode reward: [(0, '20.498')]
[2025-02-13 17:16:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4091904. Throughput: 0: 195.4. Samples: 1025132. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:16:45,572][00196] Avg episode reward: [(0, '20.999')]
[2025-02-13 17:16:46,890][10139] Updated weights for policy 0, policy_version 1000 (0.1597)
[2025-02-13 17:16:49,490][10122] Signal inference workers to stop experience collection... (1000 times)
[2025-02-13 17:16:49,547][10139] InferenceWorker_p0-w0: stopping experience collection (1000 times)
[2025-02-13 17:16:50,569][00196] Fps is (10 sec: 409.8, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4096000. Throughput: 0: 203.2. Samples: 1026542. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:16:50,575][00196] Avg episode reward: [(0, '21.032')]
[2025-02-13 17:16:51,304][10122] Signal inference workers to resume experience collection... (1000 times)
[2025-02-13 17:16:51,305][10139] InferenceWorker_p0-w0: resuming experience collection (1000 times)
[2025-02-13 17:16:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 4100096. Throughput: 0: 201.2. Samples: 1027608. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:16:55,575][00196] Avg episode reward: [(0, '19.486')]
[2025-02-13 17:17:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4104192. Throughput: 0: 202.8. Samples: 1028278. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:17:00,572][00196] Avg episode reward: [(0, '19.330')]
[2025-02-13 17:17:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4108288. Throughput: 0: 198.5. Samples: 1029338. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:17:05,573][00196] Avg episode reward: [(0, '19.496')]
[2025-02-13 17:17:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4112384. Throughput: 0: 197.4. Samples: 1030686. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:17:10,579][00196] Avg episode reward: [(0, '19.674')]
[2025-02-13 17:17:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4116480. Throughput: 0: 190.9. Samples: 1031014. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:17:15,573][00196] Avg episode reward: [(0, '19.634')]
[2025-02-13 17:17:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4120576. Throughput: 0: 189.5. Samples: 1032060. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:17:20,574][00196] Avg episode reward: [(0, '19.937')]
[2025-02-13 17:17:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4124672. Throughput: 0: 206.7. Samples: 1033668. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:17:25,576][00196] Avg episode reward: [(0, '20.087')]
[2025-02-13 17:17:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4128768. Throughput: 0: 198.2. Samples: 1034050. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:17:30,574][00196] Avg episode reward: [(0, '20.215')]
[2025-02-13 17:17:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4132864. Throughput: 0: 186.4. Samples: 1034928. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:17:35,572][00196] Avg episode reward: [(0, '20.207')]
[2025-02-13 17:17:39,199][10139] Updated weights for policy 0, policy_version 1010 (0.2172)
[2025-02-13 17:17:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 4136960. Throughput: 0: 192.5. Samples: 1036270. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:17:40,572][00196] Avg episode reward: [(0, '20.049')]
[2025-02-13 17:17:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4141056. Throughput: 0: 191.3. Samples: 1036888. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:17:45,577][00196] Avg episode reward: [(0, '20.064')]
[2025-02-13 17:17:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4145152. Throughput: 0: 191.0. Samples: 1037932. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:17:50,572][00196] Avg episode reward: [(0, '20.318')]
[2025-02-13 17:17:54,409][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001013_4149248.pth...
[2025-02-13 17:17:54,546][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000967_3960832.pth
[2025-02-13 17:17:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4149248. Throughput: 0: 190.6. Samples: 1039264. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:17:55,573][00196] Avg episode reward: [(0, '19.900')]
[2025-02-13 17:18:00,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 4153344. Throughput: 0: 198.4. Samples: 1039942. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:18:00,577][00196] Avg episode reward: [(0, '19.771')]
[2025-02-13 17:18:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4157440. Throughput: 0: 197.8. Samples: 1040960. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:05,577][00196] Avg episode reward: [(0, '19.349')]
[2025-02-13 17:18:10,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4161536. Throughput: 0: 175.3. Samples: 1041558. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:10,581][00196] Avg episode reward: [(0, '19.229')]
[2025-02-13 17:18:15,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 4165632. Throughput: 0: 198.5. Samples: 1042984. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:15,582][00196] Avg episode reward: [(0, '19.040')]
[2025-02-13 17:18:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4165632. Throughput: 0: 202.2. Samples: 1044026. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:20,574][00196] Avg episode reward: [(0, '18.668')]
[2025-02-13 17:18:25,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4173824. Throughput: 0: 195.2. Samples: 1045056. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:25,580][00196] Avg episode reward: [(0, '18.891')]
[2025-02-13 17:18:30,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4177920. Throughput: 0: 204.2. Samples: 1046078. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:30,573][00196] Avg episode reward: [(0, '19.213')]
[2025-02-13 17:18:30,650][10139] Updated weights for policy 0, policy_version 1020 (0.2700)
[2025-02-13 17:18:35,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4177920. Throughput: 0: 198.6. Samples: 1046868. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:35,572][00196] Avg episode reward: [(0, '19.244')]
[2025-02-13 17:18:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4182016. Throughput: 0: 197.6. Samples: 1048156. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:40,574][00196] Avg episode reward: [(0, '19.986')]
[2025-02-13 17:18:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4186112. Throughput: 0: 200.0. Samples: 1048940. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:45,580][00196] Avg episode reward: [(0, '20.853')]
[2025-02-13 17:18:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4190208. Throughput: 0: 201.2. Samples: 1050016. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:50,573][00196] Avg episode reward: [(0, '20.862')]
[2025-02-13 17:18:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4194304. Throughput: 0: 214.8. Samples: 1051224. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:18:55,583][00196] Avg episode reward: [(0, '20.763')]
[2025-02-13 17:19:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 4198400. Throughput: 0: 198.9. Samples: 1051934. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:00,579][00196] Avg episode reward: [(0, '22.126')]
[2025-02-13 17:19:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4202496. Throughput: 0: 204.4. Samples: 1053224. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:05,572][00196] Avg episode reward: [(0, '22.872')]
[2025-02-13 17:19:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4206592. Throughput: 0: 204.7. Samples: 1054266. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:10,572][00196] Avg episode reward: [(0, '22.459')]
[2025-02-13 17:19:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 4210688. Throughput: 0: 194.4. Samples: 1054826. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:15,582][00196] Avg episode reward: [(0, '23.087')]
[2025-02-13 17:19:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4214784. Throughput: 0: 209.6. Samples: 1056302. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:20,571][00196] Avg episode reward: [(0, '23.155')]
[2025-02-13 17:19:21,686][10139] Updated weights for policy 0, policy_version 1030 (0.1022)
[2025-02-13 17:19:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4218880. Throughput: 0: 204.5. Samples: 1057358. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:25,573][00196] Avg episode reward: [(0, '24.052')]
[2025-02-13 17:19:30,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4227072. Throughput: 0: 205.9. Samples: 1058204. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:19:30,572][00196] Avg episode reward: [(0, '24.461')]
[2025-02-13 17:19:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4227072. Throughput: 0: 207.5. Samples: 1059354. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:19:35,573][00196] Avg episode reward: [(0, '24.330')]
[2025-02-13 17:19:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4231168. Throughput: 0: 205.0. Samples: 1060448. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:40,572][00196] Avg episode reward: [(0, '24.021')]
[2025-02-13 17:19:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4235264. Throughput: 0: 205.0. Samples: 1061160. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:45,580][00196] Avg episode reward: [(0, '24.163')]
[2025-02-13 17:19:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4239360. Throughput: 0: 205.3. Samples: 1062464. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:50,571][00196] Avg episode reward: [(0, '24.226')]
[2025-02-13 17:19:55,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4243456. Throughput: 0: 205.2. Samples: 1063502. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:19:55,583][00196] Avg episode reward: [(0, '24.081')]
[2025-02-13 17:19:56,289][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001037_4247552.pth...
[2025-02-13 17:19:56,405][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000990_4055040.pth
[2025-02-13 17:20:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4247552. Throughput: 0: 211.2. Samples: 1064328. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:20:00,574][00196] Avg episode reward: [(0, '23.712')]
[2025-02-13 17:20:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4251648. Throughput: 0: 199.6. Samples: 1065282. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:20:05,573][00196] Avg episode reward: [(0, '23.803')]
[2025-02-13 17:20:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4255744. Throughput: 0: 204.8. Samples: 1066572. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:20:10,572][00196] Avg episode reward: [(0, '24.243')]
[2025-02-13 17:20:12,002][10139] Updated weights for policy 0, policy_version 1040 (0.0653)
[2025-02-13 17:20:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4259840. Throughput: 0: 198.5. Samples: 1067136. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:20:15,575][00196] Avg episode reward: [(0, '24.081')]
[2025-02-13 17:20:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4263936. Throughput: 0: 198.4. Samples: 1068284. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:20:20,574][00196] Avg episode reward: [(0, '23.820')]
[2025-02-13 17:20:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4268032. Throughput: 0: 204.0. Samples: 1069630. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:20:25,578][00196] Avg episode reward: [(0, '24.507')]
[2025-02-13 17:20:30,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 4272128. Throughput: 0: 195.8. Samples: 1069970. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:20:30,572][00196] Avg episode reward: [(0, '24.660')]
[2025-02-13 17:20:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4276224. Throughput: 0: 202.0. Samples: 1071554. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:20:35,572][00196] Avg episode reward: [(0, '25.330')]
[2025-02-13 17:20:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4280320. Throughput: 0: 200.8. Samples: 1072540. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:20:40,576][00196] Avg episode reward: [(0, '25.035')]
[2025-02-13 17:20:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4284416. Throughput: 0: 189.2. Samples: 1072840. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:20:45,572][00196] Avg episode reward: [(0, '25.026')]
[2025-02-13 17:20:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4288512. Throughput: 0: 205.2. Samples: 1074516. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:20:50,577][00196] Avg episode reward: [(0, '24.612')]
[2025-02-13 17:20:55,571][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4292608. Throughput: 0: 201.6. Samples: 1075644. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:20:55,584][00196] Avg episode reward: [(0, '23.998')]
[2025-02-13 17:21:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4296704. Throughput: 0: 194.4. Samples: 1075886. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:00,574][00196] Avg episode reward: [(0, '24.041')]
[2025-02-13 17:21:02,372][10139] Updated weights for policy 0, policy_version 1050 (0.2252)
[2025-02-13 17:21:05,575][00196] Fps is (10 sec: 818.9, 60 sec: 819.1, 300 sec: 791.4). Total num frames: 4300800. Throughput: 0: 204.9. Samples: 1077504. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:05,581][00196] Avg episode reward: [(0, '23.051')]
[2025-02-13 17:21:05,908][10122] Signal inference workers to stop experience collection... (1050 times)
[2025-02-13 17:21:06,003][10139] InferenceWorker_p0-w0: stopping experience collection (1050 times)
[2025-02-13 17:21:08,352][10122] Signal inference workers to resume experience collection... (1050 times)
[2025-02-13 17:21:08,353][10139] InferenceWorker_p0-w0: resuming experience collection (1050 times)
[2025-02-13 17:21:10,586][00196] Fps is (10 sec: 817.8, 60 sec: 819.0, 300 sec: 805.3). Total num frames: 4304896. Throughput: 0: 191.5. Samples: 1078250. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:10,592][00196] Avg episode reward: [(0, '23.069')]
[2025-02-13 17:21:15,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4308992. Throughput: 0: 196.2. Samples: 1078798. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:15,574][00196] Avg episode reward: [(0, '22.520')]
[2025-02-13 17:21:20,569][00196] Fps is (10 sec: 820.6, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4313088. Throughput: 0: 191.3. Samples: 1080164. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:21:20,577][00196] Avg episode reward: [(0, '22.774')]
[2025-02-13 17:21:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4317184. Throughput: 0: 195.0. Samples: 1081316. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:21:25,571][00196] Avg episode reward: [(0, '22.297')]
[2025-02-13 17:21:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4321280. Throughput: 0: 202.2. Samples: 1081938. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:30,575][00196] Avg episode reward: [(0, '21.538')]
[2025-02-13 17:21:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 4325376. Throughput: 0: 196.1. Samples: 1083340. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:35,576][00196] Avg episode reward: [(0, '21.944')]
[2025-02-13 17:21:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4329472. Throughput: 0: 192.1. Samples: 1084286. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:40,571][00196] Avg episode reward: [(0, '21.925')]
[2025-02-13 17:21:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4333568. Throughput: 0: 202.8. Samples: 1085010. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:45,571][00196] Avg episode reward: [(0, '21.247')]
[2025-02-13 17:21:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4337664. Throughput: 0: 193.9. Samples: 1086228. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:50,573][00196] Avg episode reward: [(0, '21.083')]
[2025-02-13 17:21:54,506][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001060_4341760.pth...
[2025-02-13 17:21:54,518][10139] Updated weights for policy 0, policy_version 1060 (0.1684)
[2025-02-13 17:21:54,663][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001013_4149248.pth
[2025-02-13 17:21:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4341760. Throughput: 0: 200.3. Samples: 1087262. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:21:55,575][00196] Avg episode reward: [(0, '21.228')]
[2025-02-13 17:22:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4345856. Throughput: 0: 205.6. Samples: 1088048. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:22:00,574][00196] Avg episode reward: [(0, '21.228')]
[2025-02-13 17:22:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 4349952. Throughput: 0: 206.0. Samples: 1089432. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:22:05,577][00196] Avg episode reward: [(0, '21.200')]
[2025-02-13 17:22:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.4, 300 sec: 805.3). Total num frames: 4354048. Throughput: 0: 198.2. Samples: 1090234. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:22:10,572][00196] Avg episode reward: [(0, '21.050')]
[2025-02-13 17:22:15,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4358144. Throughput: 0: 204.5. Samples: 1091140. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:22:15,577][00196] Avg episode reward: [(0, '21.117')]
[2025-02-13 17:22:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4362240. Throughput: 0: 201.6. Samples: 1092410. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:22:20,572][00196] Avg episode reward: [(0, '22.300')]
[2025-02-13 17:22:25,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4366336. Throughput: 0: 203.0. Samples: 1093420. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:22:25,571][00196] Avg episode reward: [(0, '22.379')]
[2025-02-13 17:22:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4370432. Throughput: 0: 204.5. Samples: 1094214. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:22:30,573][00196] Avg episode reward: [(0, '21.934')]
[2025-02-13 17:22:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4374528. Throughput: 0: 207.8. Samples: 1095580. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:22:35,573][00196] Avg episode reward: [(0, '21.743')]
[2025-02-13 17:22:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4378624. Throughput: 0: 205.9. Samples: 1096528. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:22:40,573][00196] Avg episode reward: [(0, '21.788')]
[2025-02-13 17:22:44,290][10139] Updated weights for policy 0, policy_version 1070 (0.0057)
[2025-02-13 17:22:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4382720. Throughput: 0: 204.9. Samples: 1097270. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:22:45,573][00196] Avg episode reward: [(0, '20.687')]
[2025-02-13 17:22:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4386816. Throughput: 0: 204.0. Samples: 1098614. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:22:50,572][00196] Avg episode reward: [(0, '20.087')]
[2025-02-13 17:22:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4390912. Throughput: 0: 208.5. Samples: 1099616. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:22:55,572][00196] Avg episode reward: [(0, '20.205')]
[2025-02-13 17:23:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4395008. Throughput: 0: 204.6. Samples: 1100346. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:23:00,573][00196] Avg episode reward: [(0, '19.619')]
[2025-02-13 17:23:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4399104. Throughput: 0: 204.3. Samples: 1101604. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:23:05,581][00196] Avg episode reward: [(0, '19.481')]
[2025-02-13 17:23:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4403200. Throughput: 0: 209.2. Samples: 1102834. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:23:10,574][00196] Avg episode reward: [(0, '19.916')]
[2025-02-13 17:23:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4407296. Throughput: 0: 204.1. Samples: 1103400. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:23:15,571][00196] Avg episode reward: [(0, '19.699')]
[2025-02-13 17:23:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4411392. Throughput: 0: 205.2. Samples: 1104814. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:23:20,575][00196] Avg episode reward: [(0, '19.983')]
[2025-02-13 17:23:25,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4415488. Throughput: 0: 204.7. Samples: 1105740. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:23:25,573][00196] Avg episode reward: [(0, '19.983')]
[2025-02-13 17:23:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4419584. Throughput: 0: 204.2. Samples: 1106458. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:23:30,575][00196] Avg episode reward: [(0, '20.335')]
[2025-02-13 17:23:33,863][10139] Updated weights for policy 0, policy_version 1080 (0.1151)
[2025-02-13 17:23:35,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4423680. Throughput: 0: 202.1. Samples: 1107710. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:23:35,573][00196] Avg episode reward: [(0, '19.962')]
[2025-02-13 17:23:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4427776. Throughput: 0: 206.9. Samples: 1108928. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:23:40,577][00196] Avg episode reward: [(0, '21.039')]
[2025-02-13 17:23:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4431872. Throughput: 0: 203.0. Samples: 1109482. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:23:45,571][00196] Avg episode reward: [(0, '21.569')]
[2025-02-13 17:23:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4435968. Throughput: 0: 206.8. Samples: 1110912. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:23:50,574][00196] Avg episode reward: [(0, '21.249')]
[2025-02-13 17:23:52,570][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001084_4440064.pth...
[2025-02-13 17:23:52,684][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001037_4247552.pth
[2025-02-13 17:23:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4440064. Throughput: 0: 210.6. Samples: 1112312. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:23:55,579][00196] Avg episode reward: [(0, '21.505')]
[2025-02-13 17:24:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4444160. Throughput: 0: 203.1. Samples: 1112538. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:24:00,577][00196] Avg episode reward: [(0, '21.191')]
[2025-02-13 17:24:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4448256. Throughput: 0: 207.3. Samples: 1114144. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:24:05,576][00196] Avg episode reward: [(0, '20.900')]
[2025-02-13 17:24:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4452352. Throughput: 0: 214.4. Samples: 1115386. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:24:10,577][00196] Avg episode reward: [(0, '20.647')]
[2025-02-13 17:24:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4456448. Throughput: 0: 203.5. Samples: 1115614. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:24:15,572][00196] Avg episode reward: [(0, '19.629')]
[2025-02-13 17:24:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4460544. Throughput: 0: 205.3. Samples: 1116950. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:24:20,572][00196] Avg episode reward: [(0, '19.420')]
[2025-02-13 17:24:22,959][10139] Updated weights for policy 0, policy_version 1090 (0.0047)
[2025-02-13 17:24:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4464640. Throughput: 0: 213.0. Samples: 1118512. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:24:25,575][00196] Avg episode reward: [(0, '19.068')]
[2025-02-13 17:24:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4468736. Throughput: 0: 204.7. Samples: 1118692. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:24:30,574][00196] Avg episode reward: [(0, '19.042')]
[2025-02-13 17:24:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4472832. Throughput: 0: 203.3. Samples: 1120060.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:24:35,573][00196] Avg episode reward: [(0, '20.152')] [2025-02-13 17:24:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4476928. Throughput: 0: 201.5. Samples: 1121380. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:24:40,576][00196] Avg episode reward: [(0, '20.245')] [2025-02-13 17:24:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4481024. Throughput: 0: 205.5. Samples: 1121784. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:24:45,577][00196] Avg episode reward: [(0, '20.108')] [2025-02-13 17:24:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4485120. Throughput: 0: 194.6. Samples: 1122902. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:24:50,571][00196] Avg episode reward: [(0, '20.567')] [2025-02-13 17:24:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4489216. Throughput: 0: 208.9. Samples: 1124786. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:24:55,574][00196] Avg episode reward: [(0, '20.469')] [2025-02-13 17:25:00,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4493312. Throughput: 0: 207.2. Samples: 1124938. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:25:00,576][00196] Avg episode reward: [(0, '20.682')] [2025-02-13 17:25:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4497408. Throughput: 0: 198.5. Samples: 1125882. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:25:05,575][00196] Avg episode reward: [(0, '20.432')] [2025-02-13 17:25:10,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4501504. Throughput: 0: 198.3. Samples: 1127434. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:25:10,572][00196] Avg episode reward: [(0, '20.740')] [2025-02-13 17:25:13,528][10139] Updated weights for policy 0, policy_version 1100 (0.2190) [2025-02-13 17:25:15,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4505600. Throughput: 0: 203.6. Samples: 1127854. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:25:15,576][00196] Avg episode reward: [(0, '20.794')] [2025-02-13 17:25:18,460][10122] Signal inference workers to stop experience collection... (1100 times) [2025-02-13 17:25:18,500][10139] InferenceWorker_p0-w0: stopping experience collection (1100 times) [2025-02-13 17:25:19,766][10122] Signal inference workers to resume experience collection... (1100 times) [2025-02-13 17:25:19,768][10139] InferenceWorker_p0-w0: resuming experience collection (1100 times) [2025-02-13 17:25:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4509696. Throughput: 0: 196.8. Samples: 1128914. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:25:20,572][00196] Avg episode reward: [(0, '20.851')] [2025-02-13 17:25:25,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4513792. Throughput: 0: 199.6. Samples: 1130364. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:25:25,572][00196] Avg episode reward: [(0, '20.729')] [2025-02-13 17:25:30,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4517888. Throughput: 0: 205.4. Samples: 1131026. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:25:30,576][00196] Avg episode reward: [(0, '21.315')] [2025-02-13 17:25:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4521984. Throughput: 0: 202.8. Samples: 1132026. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:25:35,578][00196] Avg episode reward: [(0, '20.667')] [2025-02-13 17:25:40,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4526080. Throughput: 0: 189.4. Samples: 1133310. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:25:40,572][00196] Avg episode reward: [(0, '20.716')] [2025-02-13 17:25:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4530176. Throughput: 0: 203.3. Samples: 1134086. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:25:45,576][00196] Avg episode reward: [(0, '20.828')] [2025-02-13 17:25:50,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4530176. Throughput: 0: 204.2. Samples: 1135072. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:25:50,573][00196] Avg episode reward: [(0, '20.828')] [2025-02-13 17:25:55,504][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001108_4538368.pth... [2025-02-13 17:25:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4538368. Throughput: 0: 193.3. Samples: 1136134. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:25:55,573][00196] Avg episode reward: [(0, '21.141')] [2025-02-13 17:25:55,679][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001060_4341760.pth [2025-02-13 17:26:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 805.3). Total num frames: 4538368. Throughput: 0: 206.8. Samples: 1137160. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:26:00,580][00196] Avg episode reward: [(0, '20.932')] [2025-02-13 17:26:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.4). Total num frames: 4542464. Throughput: 0: 198.8. Samples: 1137858. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:26:05,581][00196] Avg episode reward: [(0, '21.704')] [2025-02-13 17:26:07,532][10139] Updated weights for policy 0, policy_version 1110 (0.1726) [2025-02-13 17:26:10,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4546560. Throughput: 0: 194.9. Samples: 1139134. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:26:10,577][00196] Avg episode reward: [(0, '21.378')] [2025-02-13 17:26:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4550656. Throughput: 0: 191.7. Samples: 1139654. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:26:15,573][00196] Avg episode reward: [(0, '21.725')] [2025-02-13 17:26:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4554752. Throughput: 0: 193.5. Samples: 1140734. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:26:20,578][00196] Avg episode reward: [(0, '22.409')] [2025-02-13 17:26:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4558848. Throughput: 0: 193.7. Samples: 1142026. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:26:25,580][00196] Avg episode reward: [(0, '22.306')] [2025-02-13 17:26:30,572][00196] Fps is (10 sec: 819.0, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4562944. Throughput: 0: 185.5. Samples: 1142432. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:26:30,575][00196] Avg episode reward: [(0, '22.229')] [2025-02-13 17:26:35,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4567040. Throughput: 0: 188.6. Samples: 1143558. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:26:35,582][00196] Avg episode reward: [(0, '22.607')] [2025-02-13 17:26:40,569][00196] Fps is (10 sec: 819.4, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4571136. Throughput: 0: 195.2. Samples: 1144920. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:26:40,572][00196] Avg episode reward: [(0, '22.933')] [2025-02-13 17:26:45,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4575232. Throughput: 0: 183.1. Samples: 1145400. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:26:45,574][00196] Avg episode reward: [(0, '22.747')] [2025-02-13 17:26:50,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4579328. Throughput: 0: 193.3. Samples: 1146556. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:26:50,579][00196] Avg episode reward: [(0, '22.147')] [2025-02-13 17:26:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 4583424. Throughput: 0: 189.7. Samples: 1147672. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:26:55,572][00196] Avg episode reward: [(0, '22.997')] [2025-02-13 17:26:58,679][10139] Updated weights for policy 0, policy_version 1120 (0.1184) [2025-02-13 17:27:00,573][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4587520. Throughput: 0: 194.3. Samples: 1148400. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:27:00,582][00196] Avg episode reward: [(0, '22.940')] [2025-02-13 17:27:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4591616. Throughput: 0: 195.9. Samples: 1149550. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:27:05,576][00196] Avg episode reward: [(0, '23.560')] [2025-02-13 17:27:10,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4595712. Throughput: 0: 192.7. Samples: 1150698. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:27:10,577][00196] Avg episode reward: [(0, '23.455')] [2025-02-13 17:27:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4599808. Throughput: 0: 201.2. Samples: 1151484. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:27:15,572][00196] Avg episode reward: [(0, '23.079')] [2025-02-13 17:27:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4603904. Throughput: 0: 201.0. Samples: 1152602. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:27:20,576][00196] Avg episode reward: [(0, '22.913')] [2025-02-13 17:27:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4608000. Throughput: 0: 197.5. Samples: 1153808. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:27:25,580][00196] Avg episode reward: [(0, '22.418')] [2025-02-13 17:27:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4612096. Throughput: 0: 203.2. Samples: 1154546. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:27:30,578][00196] Avg episode reward: [(0, '22.576')] [2025-02-13 17:27:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4616192. Throughput: 0: 207.8. Samples: 1155908. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:27:35,576][00196] Avg episode reward: [(0, '22.489')] [2025-02-13 17:27:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4620288. Throughput: 0: 201.9. Samples: 1156758. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:27:40,581][00196] Avg episode reward: [(0, '22.311')] [2025-02-13 17:27:45,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4624384. Throughput: 0: 206.1. Samples: 1157674. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:27:45,577][00196] Avg episode reward: [(0, '21.822')] [2025-02-13 17:27:48,262][10139] Updated weights for policy 0, policy_version 1130 (0.1741) [2025-02-13 17:27:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4628480. Throughput: 0: 211.2. Samples: 1159052. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:27:50,574][00196] Avg episode reward: [(0, '22.002')] [2025-02-13 17:27:54,823][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001131_4632576.pth... [2025-02-13 17:27:54,939][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001084_4440064.pth [2025-02-13 17:27:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4632576. Throughput: 0: 205.0. Samples: 1159924. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:27:55,576][00196] Avg episode reward: [(0, '21.738')] [2025-02-13 17:28:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4636672. Throughput: 0: 205.0. Samples: 1160708. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:28:00,573][00196] Avg episode reward: [(0, '22.083')] [2025-02-13 17:28:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4640768. Throughput: 0: 206.7. Samples: 1161902. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:28:05,571][00196] Avg episode reward: [(0, '22.116')] [2025-02-13 17:28:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4644864. Throughput: 0: 194.8. Samples: 1162572. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:28:10,576][00196] Avg episode reward: [(0, '22.116')] [2025-02-13 17:28:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4648960. Throughput: 0: 205.0. Samples: 1163772. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:28:15,579][00196] Avg episode reward: [(0, '22.001')] [2025-02-13 17:28:20,572][00196] Fps is (10 sec: 818.9, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4653056. Throughput: 0: 202.3. Samples: 1165012. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:28:20,578][00196] Avg episode reward: [(0, '21.636')] [2025-02-13 17:28:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4657152. Throughput: 0: 204.3. Samples: 1165952. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:28:25,580][00196] Avg episode reward: [(0, '21.767')] [2025-02-13 17:28:30,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4661248. Throughput: 0: 202.9. Samples: 1166806. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:28:30,572][00196] Avg episode reward: [(0, '22.045')] [2025-02-13 17:28:35,575][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 4665344. Throughput: 0: 203.4. Samples: 1168206. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:28:35,578][00196] Avg episode reward: [(0, '22.584')] [2025-02-13 17:28:38,732][10139] Updated weights for policy 0, policy_version 1140 (0.0560) [2025-02-13 17:28:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4669440. Throughput: 0: 206.1. Samples: 1169198. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:28:40,572][00196] Avg episode reward: [(0, '22.203')] [2025-02-13 17:28:45,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4673536. Throughput: 0: 203.8. Samples: 1169880. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:28:45,579][00196] Avg episode reward: [(0, '22.191')] [2025-02-13 17:28:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4677632. Throughput: 0: 205.2. Samples: 1171138. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:28:50,573][00196] Avg episode reward: [(0, '22.066')] [2025-02-13 17:28:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4681728. Throughput: 0: 216.4. Samples: 1172312. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:28:55,580][00196] Avg episode reward: [(0, '22.411')] [2025-02-13 17:29:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4685824. Throughput: 0: 203.3. Samples: 1172922. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:29:00,572][00196] Avg episode reward: [(0, '22.135')] [2025-02-13 17:29:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4689920. Throughput: 0: 206.6. Samples: 1174308. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:05,572][00196] Avg episode reward: [(0, '21.837')] [2025-02-13 17:29:10,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4694016. Throughput: 0: 210.9. Samples: 1175442. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:10,580][00196] Avg episode reward: [(0, '21.947')] [2025-02-13 17:29:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4698112. Throughput: 0: 203.8. Samples: 1175976. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:15,574][00196] Avg episode reward: [(0, '21.883')] [2025-02-13 17:29:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4702208. Throughput: 0: 198.2. Samples: 1177122. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:20,579][00196] Avg episode reward: [(0, '22.022')] [2025-02-13 17:29:25,574][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 4706304. Throughput: 0: 204.4. Samples: 1178396. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:25,584][00196] Avg episode reward: [(0, '22.437')] [2025-02-13 17:29:29,552][10139] Updated weights for policy 0, policy_version 1150 (0.1647) [2025-02-13 17:29:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4710400. Throughput: 0: 203.6. Samples: 1179042. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:30,573][00196] Avg episode reward: [(0, '22.457')] [2025-02-13 17:29:32,592][10122] Signal inference workers to stop experience collection... (1150 times) [2025-02-13 17:29:32,641][10139] InferenceWorker_p0-w0: stopping experience collection (1150 times) [2025-02-13 17:29:34,238][10122] Signal inference workers to resume experience collection... (1150 times) [2025-02-13 17:29:34,240][10139] InferenceWorker_p0-w0: resuming experience collection (1150 times) [2025-02-13 17:29:35,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 4714496. Throughput: 0: 199.5. Samples: 1180116. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:35,571][00196] Avg episode reward: [(0, '21.838')] [2025-02-13 17:29:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4718592. Throughput: 0: 203.3. Samples: 1181462. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:40,587][00196] Avg episode reward: [(0, '21.658')] [2025-02-13 17:29:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4722688. Throughput: 0: 202.9. Samples: 1182054. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:45,572][00196] Avg episode reward: [(0, '21.723')] [2025-02-13 17:29:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4726784. Throughput: 0: 196.2. Samples: 1183138. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:50,572][00196] Avg episode reward: [(0, '22.032')] [2025-02-13 17:29:53,445][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001155_4730880.pth... [2025-02-13 17:29:53,570][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001108_4538368.pth [2025-02-13 17:29:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4730880. Throughput: 0: 204.6. 
Samples: 1184648. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:29:55,572][00196] Avg episode reward: [(0, '22.432')] [2025-02-13 17:30:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4734976. Throughput: 0: 204.9. Samples: 1185198. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:00,574][00196] Avg episode reward: [(0, '22.472')] [2025-02-13 17:30:05,571][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4739072. Throughput: 0: 202.3. Samples: 1186226. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:05,575][00196] Avg episode reward: [(0, '22.408')] [2025-02-13 17:30:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4743168. Throughput: 0: 204.5. Samples: 1187596. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:10,572][00196] Avg episode reward: [(0, '22.521')] [2025-02-13 17:30:15,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4747264. Throughput: 0: 203.9. Samples: 1188218. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:15,577][00196] Avg episode reward: [(0, '22.726')] [2025-02-13 17:30:19,793][10139] Updated weights for policy 0, policy_version 1160 (0.1600) [2025-02-13 17:30:20,575][00196] Fps is (10 sec: 818.8, 60 sec: 819.1, 300 sec: 805.3). Total num frames: 4751360. Throughput: 0: 204.0. Samples: 1189296. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:20,583][00196] Avg episode reward: [(0, '22.329')] [2025-02-13 17:30:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 805.3). Total num frames: 4755456. Throughput: 0: 201.0. Samples: 1190506. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:25,573][00196] Avg episode reward: [(0, '22.393')] [2025-02-13 17:30:30,572][00196] Fps is (10 sec: 819.5, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4759552. Throughput: 0: 201.2. Samples: 1191108. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:30,577][00196] Avg episode reward: [(0, '21.775')] [2025-02-13 17:30:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4763648. Throughput: 0: 204.5. Samples: 1192342. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:35,578][00196] Avg episode reward: [(0, '22.138')] [2025-02-13 17:30:40,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4767744. Throughput: 0: 205.3. Samples: 1193886. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:40,572][00196] Avg episode reward: [(0, '22.192')] [2025-02-13 17:30:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4771840. Throughput: 0: 205.6. Samples: 1194452. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:45,577][00196] Avg episode reward: [(0, '21.817')] [2025-02-13 17:30:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 4775936. Throughput: 0: 205.6. Samples: 1195476. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:30:50,573][00196] Avg episode reward: [(0, '22.484')] [2025-02-13 17:30:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4780032. Throughput: 0: 214.6. Samples: 1197254. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:30:55,572][00196] Avg episode reward: [(0, '22.202')] [2025-02-13 17:31:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4784128. Throughput: 0: 206.8. Samples: 1197522. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:31:00,572][00196] Avg episode reward: [(0, '21.823')] [2025-02-13 17:31:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4788224. Throughput: 0: 211.4. Samples: 1198810. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:31:05,573][00196] Avg episode reward: [(0, '21.760')] [2025-02-13 17:31:07,717][10139] Updated weights for policy 0, policy_version 1170 (0.1117) [2025-02-13 17:31:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4792320. Throughput: 0: 222.4. Samples: 1200514. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:31:10,573][00196] Avg episode reward: [(0, '21.846')] [2025-02-13 17:31:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4796416. Throughput: 0: 213.4. Samples: 1200710. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:31:15,574][00196] Avg episode reward: [(0, '21.420')] [2025-02-13 17:31:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 4800512. Throughput: 0: 213.9. Samples: 1201968. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:31:20,572][00196] Avg episode reward: [(0, '21.784')] [2025-02-13 17:31:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4804608. Throughput: 0: 215.5. Samples: 1203584. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:31:25,574][00196] Avg episode reward: [(0, '21.855')] [2025-02-13 17:31:30,575][00196] Fps is (10 sec: 818.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4808704. Throughput: 0: 214.1. Samples: 1204086. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:31:30,588][00196] Avg episode reward: [(0, '21.735')] [2025-02-13 17:31:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4812800. Throughput: 0: 212.2. Samples: 1205024. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:31:35,575][00196] Avg episode reward: [(0, '22.228')] [2025-02-13 17:31:40,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4816896. Throughput: 0: 209.7. Samples: 1206690. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:31:40,572][00196] Avg episode reward: [(0, '22.559')]
[2025-02-13 17:31:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4820992. Throughput: 0: 212.1. Samples: 1207068. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:31:45,577][00196] Avg episode reward: [(0, '23.299')]
[2025-02-13 17:31:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4825088. Throughput: 0: 206.9. Samples: 1208122. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:31:50,575][00196] Avg episode reward: [(0, '22.897')]
[2025-02-13 17:31:52,384][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001179_4829184.pth...
[2025-02-13 17:31:52,494][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001131_4632576.pth
[2025-02-13 17:31:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4829184. Throughput: 0: 205.7. Samples: 1209770. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:31:55,573][00196] Avg episode reward: [(0, '22.199')]
[2025-02-13 17:31:56,401][10139] Updated weights for policy 0, policy_version 1180 (0.1132)
[2025-02-13 17:32:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4833280. Throughput: 0: 219.3. Samples: 1210578. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:32:00,574][00196] Avg episode reward: [(0, '22.174')]
[2025-02-13 17:32:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4837376. Throughput: 0: 211.2. Samples: 1211472. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:32:05,572][00196] Avg episode reward: [(0, '22.271')]
[2025-02-13 17:32:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4841472. Throughput: 0: 206.0. Samples: 1212854. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:32:10,573][00196] Avg episode reward: [(0, '22.073')]
[2025-02-13 17:32:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4845568. Throughput: 0: 212.2. Samples: 1213636. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:32:15,579][00196] Avg episode reward: [(0, '22.147')]
[2025-02-13 17:32:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4849664. Throughput: 0: 211.5. Samples: 1214542. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:32:20,574][00196] Avg episode reward: [(0, '22.174')]
[2025-02-13 17:32:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4853760. Throughput: 0: 206.5. Samples: 1215984. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:32:25,579][00196] Avg episode reward: [(0, '21.824')]
[2025-02-13 17:32:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 4857856. Throughput: 0: 210.5. Samples: 1216540. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:32:30,573][00196] Avg episode reward: [(0, '21.663')]
[2025-02-13 17:32:35,571][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4861952. Throughput: 0: 209.7. Samples: 1217558. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:32:35,579][00196] Avg episode reward: [(0, '21.884')]
[2025-02-13 17:32:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4866048. Throughput: 0: 204.0. Samples: 1218952. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:32:40,574][00196] Avg episode reward: [(0, '21.501')]
[2025-02-13 17:32:45,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4870144. Throughput: 0: 198.1. Samples: 1219492. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:32:45,572][00196] Avg episode reward: [(0, '22.133')]
[2025-02-13 17:32:46,057][10139] Updated weights for policy 0, policy_version 1190 (0.1630)
[2025-02-13 17:32:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4874240. Throughput: 0: 211.4. Samples: 1220984. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:32:50,578][00196] Avg episode reward: [(0, '21.578')]
[2025-02-13 17:32:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4878336. Throughput: 0: 205.6. Samples: 1222108. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:32:55,573][00196] Avg episode reward: [(0, '21.048')]
[2025-02-13 17:33:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4882432. Throughput: 0: 203.7. Samples: 1222802. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:00,582][00196] Avg episode reward: [(0, '21.385')]
[2025-02-13 17:33:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4886528. Throughput: 0: 212.1. Samples: 1224088. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:05,575][00196] Avg episode reward: [(0, '21.982')]
[2025-02-13 17:33:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4890624. Throughput: 0: 202.0. Samples: 1225076. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:10,580][00196] Avg episode reward: [(0, '22.147')]
[2025-02-13 17:33:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4894720. Throughput: 0: 204.1. Samples: 1225724. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:15,578][00196] Avg episode reward: [(0, '21.998')]
[2025-02-13 17:33:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4898816. Throughput: 0: 211.7. Samples: 1227082. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:20,577][00196] Avg episode reward: [(0, '23.133')]
[2025-02-13 17:33:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4902912. Throughput: 0: 204.0. Samples: 1228132. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:25,576][00196] Avg episode reward: [(0, '22.969')]
[2025-02-13 17:33:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4907008. Throughput: 0: 209.5. Samples: 1228918. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:30,578][00196] Avg episode reward: [(0, '22.636')]
[2025-02-13 17:33:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4911104. Throughput: 0: 205.5. Samples: 1230232. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 17:33:35,573][00196] Avg episode reward: [(0, '23.151')]
[2025-02-13 17:33:36,095][10139] Updated weights for policy 0, policy_version 1200 (0.0560)
[2025-02-13 17:33:39,960][10122] Signal inference workers to stop experience collection... (1200 times)
[2025-02-13 17:33:40,030][10139] InferenceWorker_p0-w0: stopping experience collection (1200 times)
[2025-02-13 17:33:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4915200. Throughput: 0: 204.0. Samples: 1231288. Policy #0 lag: (min: 1.0, avg: 1.3, max: 2.0)
[2025-02-13 17:33:40,572][00196] Avg episode reward: [(0, '23.288')]
[2025-02-13 17:33:41,530][10122] Signal inference workers to resume experience collection... (1200 times)
[2025-02-13 17:33:41,532][10139] InferenceWorker_p0-w0: resuming experience collection (1200 times)
[2025-02-13 17:33:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4919296. Throughput: 0: 201.2. Samples: 1231854. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:45,582][00196] Avg episode reward: [(0, '23.002')]
[2025-02-13 17:33:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4923392. Throughput: 0: 205.3. Samples: 1233328. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:50,572][00196] Avg episode reward: [(0, '23.042')]
[2025-02-13 17:33:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4927488. Throughput: 0: 206.5. Samples: 1234368. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:33:55,576][00196] Avg episode reward: [(0, '23.101')]
[2025-02-13 17:33:56,127][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001204_4931584.pth...
[2025-02-13 17:33:56,244][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001155_4730880.pth
[2025-02-13 17:34:00,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 833.1). Total num frames: 4935680. Throughput: 0: 211.0. Samples: 1235220. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:34:00,573][00196] Avg episode reward: [(0, '23.119')]
[2025-02-13 17:34:05,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 833.1). Total num frames: 4939776. Throughput: 0: 207.7. Samples: 1236430. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:34:05,572][00196] Avg episode reward: [(0, '22.941')]
[2025-02-13 17:34:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4939776. Throughput: 0: 207.7. Samples: 1237480. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:34:10,575][00196] Avg episode reward: [(0, '23.079')]
[2025-02-13 17:34:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4943872. Throughput: 0: 207.7. Samples: 1238266. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:34:15,584][00196] Avg episode reward: [(0, '23.420')]
[2025-02-13 17:34:20,569][00196] Fps is (10 sec: 1228.8, 60 sec: 887.5, 300 sec: 833.1). Total num frames: 4952064. Throughput: 0: 206.2. Samples: 1239512. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:34:20,575][00196] Avg episode reward: [(0, '24.159')]
[2025-02-13 17:34:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4952064. Throughput: 0: 206.0. Samples: 1240556. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:34:25,574][00196] Avg episode reward: [(0, '24.266')]
[2025-02-13 17:34:26,199][10139] Updated weights for policy 0, policy_version 1210 (0.1713)
[2025-02-13 17:34:30,570][00196] Fps is (10 sec: 819.2, 60 sec: 887.5, 300 sec: 833.1). Total num frames: 4960256. Throughput: 0: 213.0. Samples: 1241438. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:34:30,584][00196] Avg episode reward: [(0, '24.556')]
[2025-02-13 17:34:35,570][00196] Fps is (10 sec: 1228.7, 60 sec: 887.5, 300 sec: 833.1). Total num frames: 4964352. Throughput: 0: 205.9. Samples: 1242592. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:34:35,573][00196] Avg episode reward: [(0, '24.520')]
[2025-02-13 17:34:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4964352. Throughput: 0: 205.3. Samples: 1243606. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:34:40,575][00196] Avg episode reward: [(0, '24.431')]
[2025-02-13 17:34:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4968448. Throughput: 0: 198.9. Samples: 1244170. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:34:45,574][00196] Avg episode reward: [(0, '24.280')]
[2025-02-13 17:34:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4972544. Throughput: 0: 206.1. Samples: 1245704. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:34:50,576][00196] Avg episode reward: [(0, '23.760')]
[2025-02-13 17:34:55,575][00196] Fps is (10 sec: 818.7, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 4976640. Throughput: 0: 201.7. Samples: 1246558. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:34:55,585][00196] Avg episode reward: [(0, '24.113')]
[2025-02-13 17:35:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 819.2). Total num frames: 4980736. Throughput: 0: 195.6. Samples: 1247070. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:35:00,573][00196] Avg episode reward: [(0, '24.029')]
[2025-02-13 17:35:05,569][00196] Fps is (10 sec: 819.7, 60 sec: 750.9, 300 sec: 819.2). Total num frames: 4984832. Throughput: 0: 202.9. Samples: 1248642. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:35:05,580][00196] Avg episode reward: [(0, '23.683')]
[2025-02-13 17:35:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4988928. Throughput: 0: 199.6. Samples: 1249540. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:35:10,574][00196] Avg episode reward: [(0, '23.892')]
[2025-02-13 17:35:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4993024. Throughput: 0: 188.7. Samples: 1249928. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:35:15,573][00196] Avg episode reward: [(0, '23.257')]
[2025-02-13 17:35:18,224][10139] Updated weights for policy 0, policy_version 1220 (0.2300)
[2025-02-13 17:35:20,570][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 819.2). Total num frames: 4997120. Throughput: 0: 195.8. Samples: 1251404. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:35:20,574][00196] Avg episode reward: [(0, '23.486')]
[2025-02-13 17:35:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 5001216. Throughput: 0: 203.9. Samples: 1252780. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:35:25,577][00196] Avg episode reward: [(0, '22.424')]
[2025-02-13 17:35:30,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 819.2). Total num frames: 5005312. Throughput: 0: 195.4. Samples: 1252964. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:35:30,575][00196] Avg episode reward: [(0, '22.850')]
[2025-02-13 17:35:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 819.2). Total num frames: 5009408. Throughput: 0: 185.9. Samples: 1254070. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:35:35,584][00196] Avg episode reward: [(0, '23.186')]
[2025-02-13 17:35:40,575][00196] Fps is (10 sec: 818.9, 60 sec: 819.1, 300 sec: 819.2). Total num frames: 5013504. Throughput: 0: 198.6. Samples: 1255496. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:35:40,578][00196] Avg episode reward: [(0, '22.941')]
[2025-02-13 17:35:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5013504. Throughput: 0: 198.8. Samples: 1256018. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:35:45,579][00196] Avg episode reward: [(0, '22.515')]
[2025-02-13 17:35:50,569][00196] Fps is (10 sec: 819.6, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 5021696. Throughput: 0: 188.0. Samples: 1257104. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:35:50,573][00196] Avg episode reward: [(0, '22.234')]
[2025-02-13 17:35:55,195][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001227_5025792.pth...
[2025-02-13 17:35:55,369][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001179_4829184.pth
[2025-02-13 17:35:55,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.3, 300 sec: 819.2). Total num frames: 5025792. Throughput: 0: 192.0. Samples: 1258180. Policy #0 lag: (min: 1.0, avg: 1.7, max: 3.0)
[2025-02-13 17:35:55,573][00196] Avg episode reward: [(0, '22.332')]
[2025-02-13 17:36:00,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5025792. Throughput: 0: 196.8. Samples: 1258784. Policy #0 lag: (min: 1.0, avg: 1.7, max: 3.0)
[2025-02-13 17:36:00,571][00196] Avg episode reward: [(0, '22.279')]
[2025-02-13 17:36:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5029888. Throughput: 0: 186.6. Samples: 1259802. Policy #0 lag: (min: 1.0, avg: 1.7, max: 3.0)
[2025-02-13 17:36:05,582][00196] Avg episode reward: [(0, '22.688')]
[2025-02-13 17:36:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5033984. Throughput: 0: 185.2. Samples: 1261114. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:36:10,580][00196] Avg episode reward: [(0, '22.567')]
[2025-02-13 17:36:11,264][10139] Updated weights for policy 0, policy_version 1230 (0.1184)
[2025-02-13 17:36:15,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5038080. Throughput: 0: 190.6. Samples: 1261540. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:36:15,578][00196] Avg episode reward: [(0, '22.384')]
[2025-02-13 17:36:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5042176. Throughput: 0: 184.3. Samples: 1262362. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:36:20,580][00196] Avg episode reward: [(0, '23.121')]
[2025-02-13 17:36:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5046272. Throughput: 0: 185.4. Samples: 1263836. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:36:25,572][00196] Avg episode reward: [(0, '23.315')]
[2025-02-13 17:36:30,571][00196] Fps is (10 sec: 819.0, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5050368. Throughput: 0: 181.5. Samples: 1264186. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:36:30,575][00196] Avg episode reward: [(0, '23.511')]
[2025-02-13 17:36:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5054464. Throughput: 0: 179.0. Samples: 1265158. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:36:35,573][00196] Avg episode reward: [(0, '23.731')]
[2025-02-13 17:36:40,569][00196] Fps is (10 sec: 819.4, 60 sec: 751.0, 300 sec: 805.3). Total num frames: 5058560. Throughput: 0: 188.2. Samples: 1266648. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:36:40,572][00196] Avg episode reward: [(0, '24.295')]
[2025-02-13 17:36:45,571][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 5062656. Throughput: 0: 189.4. Samples: 1267306. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:36:45,581][00196] Avg episode reward: [(0, '23.568')]
[2025-02-13 17:36:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5066752. Throughput: 0: 187.5. Samples: 1268238. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:36:50,575][00196] Avg episode reward: [(0, '24.103')]
[2025-02-13 17:36:55,569][00196] Fps is (10 sec: 819.4, 60 sec: 750.9, 300 sec: 805.3). Total num frames: 5070848. Throughput: 0: 186.8. Samples: 1269520. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:36:55,582][00196] Avg episode reward: [(0, '24.224')]
[2025-02-13 17:37:00,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 5074944. Throughput: 0: 195.4. Samples: 1270332. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:37:00,579][00196] Avg episode reward: [(0, '25.075')]
[2025-02-13 17:37:05,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5074944. Throughput: 0: 201.1. Samples: 1271412. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:37:05,580][00196] Avg episode reward: [(0, '25.022')]
[2025-02-13 17:37:05,912][10139] Updated weights for policy 0, policy_version 1240 (0.1588)
[2025-02-13 17:37:10,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 5083136. Throughput: 0: 191.2. Samples: 1272442. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:10,574][00196] Avg episode reward: [(0, '25.048')]
[2025-02-13 17:37:15,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 5087232. Throughput: 0: 203.2. Samples: 1273328. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:15,580][00196] Avg episode reward: [(0, '25.380')]
[2025-02-13 17:37:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5087232. Throughput: 0: 203.8. Samples: 1274328. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:20,573][00196] Avg episode reward: [(0, '25.346')]
[2025-02-13 17:37:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5091328. Throughput: 0: 197.4. Samples: 1275532. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:25,575][00196] Avg episode reward: [(0, '25.715')]
[2025-02-13 17:37:30,202][10122] Saving new best policy, reward=25.715!
[2025-02-13 17:37:30,571][00196] Fps is (10 sec: 1228.6, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 5099520. Throughput: 0: 198.4. Samples: 1276232. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:30,576][00196] Avg episode reward: [(0, '25.482')]
[2025-02-13 17:37:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5099520. Throughput: 0: 205.5. Samples: 1277486. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:35,575][00196] Avg episode reward: [(0, '25.370')]
[2025-02-13 17:37:40,569][00196] Fps is (10 sec: 409.7, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5103616. Throughput: 0: 200.4. Samples: 1278536. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:40,582][00196] Avg episode reward: [(0, '25.387')]
[2025-02-13 17:37:45,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 805.3). Total num frames: 5111808. Throughput: 0: 202.0. Samples: 1279424. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:45,578][00196] Avg episode reward: [(0, '25.308')]
[2025-02-13 17:37:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5111808. Throughput: 0: 202.0. Samples: 1280500. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:50,579][00196] Avg episode reward: [(0, '25.361')]
[2025-02-13 17:37:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5115904. Throughput: 0: 201.4. Samples: 1281506. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:37:55,575][00196] Avg episode reward: [(0, '25.662')]
[2025-02-13 17:37:56,304][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001250_5120000.pth...
[2025-02-13 17:37:56,309][10139] Updated weights for policy 0, policy_version 1250 (0.1475)
[2025-02-13 17:37:56,453][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001204_4931584.pth
[2025-02-13 17:37:59,187][10122] Signal inference workers to stop experience collection... (1250 times)
[2025-02-13 17:37:59,233][10139] InferenceWorker_p0-w0: stopping experience collection (1250 times)
[2025-02-13 17:38:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5120000. Throughput: 0: 197.8. Samples: 1282228. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:38:00,573][00196] Avg episode reward: [(0, '25.563')]
[2025-02-13 17:38:00,837][10122] Signal inference workers to resume experience collection... (1250 times)
[2025-02-13 17:38:00,840][10139] InferenceWorker_p0-w0: resuming experience collection (1250 times)
[2025-02-13 17:38:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5124096. Throughput: 0: 203.2. Samples: 1283474. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:05,572][00196] Avg episode reward: [(0, '25.855')]
[2025-02-13 17:38:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5128192. Throughput: 0: 198.1. Samples: 1284446. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:10,573][00196] Avg episode reward: [(0, '25.877')]
[2025-02-13 17:38:11,914][10122] Saving new best policy, reward=25.855!
[2025-02-13 17:38:12,045][10122] Saving new best policy, reward=25.877!
[2025-02-13 17:38:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5132288. Throughput: 0: 197.1. Samples: 1285102. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:15,576][00196] Avg episode reward: [(0, '26.013')]
[2025-02-13 17:38:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5136384. Throughput: 0: 205.0. Samples: 1286710. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:20,574][00196] Avg episode reward: [(0, '25.718')]
[2025-02-13 17:38:21,367][10122] Saving new best policy, reward=26.013!
[2025-02-13 17:38:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5140480. Throughput: 0: 198.5. Samples: 1287468. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:25,576][00196] Avg episode reward: [(0, '25.685')]
[2025-02-13 17:38:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 5144576. Throughput: 0: 190.7. Samples: 1288004. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:30,579][00196] Avg episode reward: [(0, '24.805')]
[2025-02-13 17:38:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5148672. Throughput: 0: 197.2. Samples: 1289376. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:38:35,574][00196] Avg episode reward: [(0, '24.792')]
[2025-02-13 17:38:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5152768. Throughput: 0: 198.6. Samples: 1290444. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:38:40,572][00196] Avg episode reward: [(0, '24.478')]
[2025-02-13 17:38:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5156864. Throughput: 0: 191.2. Samples: 1290830. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:45,572][00196] Avg episode reward: [(0, '23.502')]
[2025-02-13 17:38:47,692][10139] Updated weights for policy 0, policy_version 1260 (0.1131)
[2025-02-13 17:38:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5160960. Throughput: 0: 200.5. Samples: 1292498. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:50,572][00196] Avg episode reward: [(0, '23.485')]
[2025-02-13 17:38:55,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5165056. Throughput: 0: 203.5. Samples: 1293604. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:38:55,573][00196] Avg episode reward: [(0, '23.482')]
[2025-02-13 17:39:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5169152. Throughput: 0: 196.9. Samples: 1293964. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:39:00,576][00196] Avg episode reward: [(0, '23.558')]
[2025-02-13 17:39:05,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5173248. Throughput: 0: 195.4. Samples: 1295502. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:39:05,578][00196] Avg episode reward: [(0, '22.954')]
[2025-02-13 17:39:10,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5177344. Throughput: 0: 206.9. Samples: 1296780. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:39:10,578][00196] Avg episode reward: [(0, '22.206')]
[2025-02-13 17:39:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5181440. Throughput: 0: 199.6. Samples: 1296986. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:39:15,572][00196] Avg episode reward: [(0, '22.978')]
[2025-02-13 17:39:20,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5185536. Throughput: 0: 199.2. Samples: 1298340. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:39:20,572][00196] Avg episode reward: [(0, '22.663')]
[2025-02-13 17:39:25,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5189632. Throughput: 0: 206.3. Samples: 1299730. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:39:25,578][00196] Avg episode reward: [(0, '22.759')]
[2025-02-13 17:39:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5193728. Throughput: 0: 203.8. Samples: 1300002. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:39:30,575][00196] Avg episode reward: [(0, '22.721')]
[2025-02-13 17:39:35,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5197824. Throughput: 0: 190.7. Samples: 1301080. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:39:35,572][00196] Avg episode reward: [(0, '23.018')]
[2025-02-13 17:39:38,369][10139] Updated weights for policy 0, policy_version 1270 (0.2133)
[2025-02-13 17:39:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5201920. Throughput: 0: 198.1. Samples: 1302520. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:39:40,583][00196] Avg episode reward: [(0, '22.912')]
[2025-02-13 17:39:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5206016. Throughput: 0: 201.5. Samples: 1303030. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:39:45,581][00196] Avg episode reward: [(0, '23.355')]
[2025-02-13 17:39:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5210112. Throughput: 0: 194.6. Samples: 1304260. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:39:50,582][00196] Avg episode reward: [(0, '23.189')]
[2025-02-13 17:39:53,613][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001273_5214208.pth...
[2025-02-13 17:39:53,748][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001227_5025792.pth
[2025-02-13 17:39:55,571][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5214208. Throughput: 0: 195.6. Samples: 1305584. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:39:55,575][00196] Avg episode reward: [(0, '23.114')]
[2025-02-13 17:40:00,581][00196] Fps is (10 sec: 818.3, 60 sec: 819.0, 300 sec: 791.4). Total num frames: 5218304. Throughput: 0: 203.4. Samples: 1306142. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:40:00,585][00196] Avg episode reward: [(0, '23.114')]
[2025-02-13 17:40:05,572][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5222400. Throughput: 0: 197.0. Samples: 1307204. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:40:05,579][00196] Avg episode reward: [(0, '22.394')]
[2025-02-13 17:40:10,570][00196] Fps is (10 sec: 820.1, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5226496. Throughput: 0: 189.3. Samples: 1308246. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:40:10,576][00196] Avg episode reward: [(0, '22.612')]
[2025-02-13 17:40:15,569][00196] Fps is (10 sec: 409.7, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5226496. Throughput: 0: 196.2. Samples: 1308832. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:40:15,573][00196] Avg episode reward: [(0, '21.827')]
[2025-02-13 17:40:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5230592. Throughput: 0: 204.7. Samples: 1310290. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:40:20,573][00196] Avg episode reward: [(0, '21.607')]
[2025-02-13 17:40:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.6). Total num frames: 5234688. Throughput: 0: 194.3. Samples: 1311262. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:40:25,574][00196] Avg episode reward: [(0, '21.689')]
[2025-02-13 17:40:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5238784. Throughput: 0: 198.0. Samples: 1311940. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:40:30,575][00196] Avg episode reward: [(0, '22.462')]
[2025-02-13 17:40:32,919][10139] Updated weights for policy 0, policy_version 1280 (0.1033)
[2025-02-13 17:40:35,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.6). Total num frames: 5242880. Throughput: 0: 190.5. Samples: 1312832. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:40:35,574][00196] Avg episode reward: [(0, '22.328')]
[2025-02-13 17:40:40,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5246976. Throughput: 0: 195.0. Samples: 1314358. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:40:40,580][00196] Avg episode reward: [(0, '22.182')]
[2025-02-13 17:40:45,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5251072. Throughput: 0: 187.3. Samples: 1314568. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:40:45,576][00196] Avg episode reward: [(0, '22.731')]
[2025-02-13 17:40:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5255168. Throughput: 0: 187.0. Samples: 1315620. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:40:50,578][00196] Avg episode reward: [(0, '22.899')]
[2025-02-13 17:40:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 5259264. Throughput: 0: 197.8. Samples: 1317148. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:40:55,572][00196] Avg episode reward: [(0, '23.152')]
[2025-02-13 17:41:00,573][00196] Fps is (10 sec: 818.9, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 5263360. Throughput: 0: 194.7. Samples: 1317594. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:41:00,577][00196] Avg episode reward: [(0, '23.174')]
[2025-02-13 17:41:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 5267456. Throughput: 0: 181.0. Samples: 1318434. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:41:05,572][00196] Avg episode reward: [(0, '22.772')]
[2025-02-13 17:41:10,569][00196] Fps is (10 sec: 819.5, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5271552. Throughput: 0: 188.8. Samples: 1319756. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:41:10,574][00196] Avg episode reward: [(0, '22.607')]
[2025-02-13 17:41:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5275648. Throughput: 0: 190.6. Samples: 1320518. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:41:15,578][00196] Avg episode reward: [(0, '22.919')]
[2025-02-13 17:41:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5275648. Throughput: 0: 193.5. Samples: 1321540. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:41:20,582][00196] Avg episode reward: [(0, '22.538')]
[2025-02-13 17:41:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.6). Total num frames: 5279744. Throughput: 0: 181.6. Samples: 1322532. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:41:25,578][00196] Avg episode reward: [(0, '21.918')]
[2025-02-13 17:41:25,591][10139] Updated weights for policy 0, policy_version 1290 (0.1081)
[2025-02-13 17:41:30,570][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5287936. Throughput: 0: 200.1. Samples: 1323572. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:41:30,574][00196] Avg episode reward: [(0, '22.219')]
[2025-02-13 17:41:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5287936. Throughput: 0: 193.5. Samples: 1324328. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:41:35,574][00196] Avg episode reward: [(0, '22.514')]
[2025-02-13 17:41:40,571][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5292032. Throughput: 0: 189.1. Samples: 1325660. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:41:40,574][00196] Avg episode reward: [(0, '22.117')]
[2025-02-13 17:41:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5296128. Throughput: 0: 195.3. Samples: 1326382. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:41:45,579][00196] Avg episode reward: [(0, '21.952')]
[2025-02-13 17:41:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5300224. Throughput: 0: 197.7. Samples: 1327332. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:41:50,576][00196] Avg episode reward: [(0, '21.438')]
[2025-02-13 17:41:52,267][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001295_5304320.pth...
[2025-02-13 17:41:52,373][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001250_5120000.pth
[2025-02-13 17:41:55,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5304320. Throughput: 0: 197.6. Samples: 1328646. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:41:55,575][00196] Avg episode reward: [(0, '21.183')]
[2025-02-13 17:42:00,569][00196] Fps is (10 sec: 819.3, 60 sec: 751.0, 300 sec: 791.4). Total num frames: 5308416. Throughput: 0: 192.6. Samples: 1329186. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:42:00,578][00196] Avg episode reward: [(0, '21.169')]
[2025-02-13 17:42:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5312512. Throughput: 0: 193.0. Samples: 1330224. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:42:05,580][00196] Avg episode reward: [(0, '20.618')]
[2025-02-13 17:42:10,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5316608. Throughput: 0: 195.3. Samples: 1331322. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:42:10,584][00196] Avg episode reward: [(0, '21.062')]
[2025-02-13 17:42:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5320704. Throughput: 0: 180.8. Samples: 1331710. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:42:15,578][00196] Avg episode reward: [(0, '21.540')]
[2025-02-13 17:42:17,949][10139] Updated weights for policy 0, policy_version 1300 (0.1771)
[2025-02-13 17:42:20,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5324800. Throughput: 0: 193.2. Samples: 1333020. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:42:20,578][00196] Avg episode reward: [(0, '21.838')]
[2025-02-13 17:42:23,248][10122] Signal inference workers to stop experience collection... (1300 times)
[2025-02-13 17:42:23,439][10139] InferenceWorker_p0-w0: stopping experience collection (1300 times)
[2025-02-13 17:42:24,935][10122] Signal inference workers to resume experience collection... (1300 times)
[2025-02-13 17:42:24,938][10139] InferenceWorker_p0-w0: resuming experience collection (1300 times)
[2025-02-13 17:42:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.6). Total num frames: 5328896. Throughput: 0: 181.6. Samples: 1333830. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:42:25,577][00196] Avg episode reward: [(0, '21.845')]
[2025-02-13 17:42:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 791.4). Total num frames: 5332992. Throughput: 0: 188.1. Samples: 1334846. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:42:30,577][00196] Avg episode reward: [(0, '23.110')]
[2025-02-13 17:42:35,570][00196] Fps is (10 sec: 819.1, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5337088. Throughput: 0: 194.4. Samples: 1336080. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:42:35,573][00196] Avg episode reward: [(0, '22.788')]
[2025-02-13 17:42:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5341184. Throughput: 0: 179.8. Samples: 1336738. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:42:40,572][00196] Avg episode reward: [(0, '23.018')]
[2025-02-13 17:42:45,569][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5345280. Throughput: 0: 191.1. Samples: 1337784. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:42:45,575][00196] Avg episode reward: [(0, '22.388')]
[2025-02-13 17:42:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 791.4). Total num frames: 5349376. Throughput: 0: 193.6. Samples: 1338936.
Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 17:42:50,572][00196] Avg episode reward: [(0, '22.774')] [2025-02-13 17:42:55,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5349376. Throughput: 0: 190.2. Samples: 1339880. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 17:42:55,572][00196] Avg episode reward: [(0, '22.450')] [2025-02-13 17:43:00,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5353472. Throughput: 0: 197.0. Samples: 1340574. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 17:43:00,578][00196] Avg episode reward: [(0, '22.435')] [2025-02-13 17:43:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5357568. Throughput: 0: 199.1. Samples: 1341978. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0) [2025-02-13 17:43:05,573][00196] Avg episode reward: [(0, '22.349')] [2025-02-13 17:43:10,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5361664. Throughput: 0: 203.5. Samples: 1342986. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:43:10,576][00196] Avg episode reward: [(0, '22.056')] [2025-02-13 17:43:12,707][10139] Updated weights for policy 0, policy_version 1310 (0.2684) [2025-02-13 17:43:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5365760. Throughput: 0: 188.7. Samples: 1343336. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:43:15,582][00196] Avg episode reward: [(0, '22.353')] [2025-02-13 17:43:20,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5369856. Throughput: 0: 196.5. Samples: 1344922. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:43:20,580][00196] Avg episode reward: [(0, '22.385')] [2025-02-13 17:43:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5373952. Throughput: 0: 201.9. Samples: 1345822. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:43:25,573][00196] Avg episode reward: [(0, '22.680')] [2025-02-13 17:43:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5378048. Throughput: 0: 189.2. Samples: 1346300. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 17:43:30,572][00196] Avg episode reward: [(0, '22.473')] [2025-02-13 17:43:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5382144. Throughput: 0: 193.6. Samples: 1347650. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 17:43:35,572][00196] Avg episode reward: [(0, '22.594')] [2025-02-13 17:43:40,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5386240. Throughput: 0: 200.2. Samples: 1348890. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 17:43:40,573][00196] Avg episode reward: [(0, '22.444')] [2025-02-13 17:43:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5390336. Throughput: 0: 189.3. Samples: 1349094. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0) [2025-02-13 17:43:45,578][00196] Avg episode reward: [(0, '22.327')] [2025-02-13 17:43:50,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5394432. Throughput: 0: 191.4. Samples: 1350590. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:43:50,582][00196] Avg episode reward: [(0, '22.111')] [2025-02-13 17:43:52,581][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001318_5398528.pth... [2025-02-13 17:43:52,716][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001273_5214208.pth [2025-02-13 17:43:55,577][00196] Fps is (10 sec: 818.6, 60 sec: 819.1, 300 sec: 777.5). Total num frames: 5398528. Throughput: 0: 199.1. Samples: 1351948. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:43:55,582][00196] Avg episode reward: [(0, '22.146')] [2025-02-13 17:44:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5402624. Throughput: 0: 196.8. Samples: 1352192. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:44:00,572][00196] Avg episode reward: [(0, '22.533')] [2025-02-13 17:44:03,544][10139] Updated weights for policy 0, policy_version 1320 (0.1609) [2025-02-13 17:44:05,569][00196] Fps is (10 sec: 819.8, 60 sec: 819.2, 300 sec: 777.6). Total num frames: 5406720. Throughput: 0: 190.8. Samples: 1353506. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:44:05,573][00196] Avg episode reward: [(0, '24.121')] [2025-02-13 17:44:10,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5410816. Throughput: 0: 197.7. Samples: 1354720. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:44:10,584][00196] Avg episode reward: [(0, '24.447')] [2025-02-13 17:44:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5414912. Throughput: 0: 198.3. Samples: 1355224. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:44:15,585][00196] Avg episode reward: [(0, '24.447')] [2025-02-13 17:44:20,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 777.6). Total num frames: 5419008. Throughput: 0: 191.4. Samples: 1356264. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:44:20,577][00196] Avg episode reward: [(0, '24.545')] [2025-02-13 17:44:25,570][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5423104. Throughput: 0: 191.3. Samples: 1357498. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:44:25,574][00196] Avg episode reward: [(0, '24.098')] [2025-02-13 17:44:30,570][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5423104. Throughput: 0: 197.6. Samples: 1357988. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:44:30,572][00196] Avg episode reward: [(0, '24.677')] [2025-02-13 17:44:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5427200. Throughput: 0: 195.3. Samples: 1359380. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:44:35,573][00196] Avg episode reward: [(0, '24.886')] [2025-02-13 17:44:40,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5435392. Throughput: 0: 175.2. Samples: 1359830. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:44:40,582][00196] Avg episode reward: [(0, '24.495')] [2025-02-13 17:44:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5435392. Throughput: 0: 197.1. Samples: 1361060. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:44:45,572][00196] Avg episode reward: [(0, '24.369')] [2025-02-13 17:44:50,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5439488. Throughput: 0: 198.8. Samples: 1362452. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:44:50,573][00196] Avg episode reward: [(0, '24.049')] [2025-02-13 17:44:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 5443584. Throughput: 0: 194.6. Samples: 1363476. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:44:55,573][00196] Avg episode reward: [(0, '24.045')] [2025-02-13 17:44:56,423][10139] Updated weights for policy 0, policy_version 1330 (0.3943) [2025-02-13 17:45:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5447680. Throughput: 0: 193.7. Samples: 1363942. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:45:00,575][00196] Avg episode reward: [(0, '24.051')] [2025-02-13 17:45:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5451776. Throughput: 0: 190.9. Samples: 1364856. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:45:05,573][00196] Avg episode reward: [(0, '23.755')] [2025-02-13 17:45:10,570][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.5). Total num frames: 5455872. Throughput: 0: 199.3. Samples: 1366466. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:45:10,577][00196] Avg episode reward: [(0, '23.966')] [2025-02-13 17:45:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5459968. Throughput: 0: 194.0. Samples: 1366716. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:45:15,578][00196] Avg episode reward: [(0, '23.411')] [2025-02-13 17:45:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5464064. Throughput: 0: 182.8. Samples: 1367608. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:45:20,572][00196] Avg episode reward: [(0, '23.573')] [2025-02-13 17:45:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5468160. Throughput: 0: 201.3. Samples: 1368890. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:45:25,573][00196] Avg episode reward: [(0, '23.913')] [2025-02-13 17:45:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5472256. Throughput: 0: 187.6. Samples: 1369504. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:45:30,572][00196] Avg episode reward: [(0, '24.335')] [2025-02-13 17:45:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5476352. Throughput: 0: 180.0. Samples: 1370552. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:45:35,580][00196] Avg episode reward: [(0, '24.213')] [2025-02-13 17:45:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5480448. Throughput: 0: 183.9. Samples: 1371750. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:45:40,573][00196] Avg episode reward: [(0, '24.271')] [2025-02-13 17:45:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5484544. Throughput: 0: 192.0. Samples: 1372584. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:45:45,572][00196] Avg episode reward: [(0, '23.894')] [2025-02-13 17:45:50,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5484544. Throughput: 0: 192.8. Samples: 1373530. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:45:50,581][00196] Avg episode reward: [(0, '23.841')] [2025-02-13 17:45:50,896][10139] Updated weights for policy 0, policy_version 1340 (0.1643) [2025-02-13 17:45:55,517][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001341_5492736.pth... [2025-02-13 17:45:55,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5492736. Throughput: 0: 180.9. Samples: 1374606. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:45:55,579][00196] Avg episode reward: [(0, '23.697')] [2025-02-13 17:45:55,638][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001295_5304320.pth [2025-02-13 17:46:00,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5496832. Throughput: 0: 198.5. Samples: 1375648. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:00,576][00196] Avg episode reward: [(0, '23.484')] [2025-02-13 17:46:05,569][00196] Fps is (10 sec: 409.7, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5496832. Throughput: 0: 197.5. Samples: 1376494. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:05,576][00196] Avg episode reward: [(0, '23.499')] [2025-02-13 17:46:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5500928. Throughput: 0: 195.4. Samples: 1377682. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:10,575][00196] Avg episode reward: [(0, '22.906')] [2025-02-13 17:46:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5505024. Throughput: 0: 196.0. Samples: 1378326. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:15,582][00196] Avg episode reward: [(0, '22.226')] [2025-02-13 17:46:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5509120. Throughput: 0: 196.1. Samples: 1379376. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:20,576][00196] Avg episode reward: [(0, '22.335')] [2025-02-13 17:46:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5513216. Throughput: 0: 198.8. Samples: 1380696. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:25,575][00196] Avg episode reward: [(0, '22.212')] [2025-02-13 17:46:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5517312. Throughput: 0: 190.3. Samples: 1381146. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:30,583][00196] Avg episode reward: [(0, '22.874')] [2025-02-13 17:46:35,577][00196] Fps is (10 sec: 818.6, 60 sec: 750.8, 300 sec: 777.5). Total num frames: 5521408. Throughput: 0: 193.6. Samples: 1382242. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:35,580][00196] Avg episode reward: [(0, '22.803')] [2025-02-13 17:46:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5525504. Throughput: 0: 199.3. Samples: 1383576. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:40,572][00196] Avg episode reward: [(0, '22.381')] [2025-02-13 17:46:42,818][10139] Updated weights for policy 0, policy_version 1350 (0.1154) [2025-02-13 17:46:45,509][10122] Signal inference workers to stop experience collection... 
(1350 times) [2025-02-13 17:46:45,553][10139] InferenceWorker_p0-w0: stopping experience collection (1350 times) [2025-02-13 17:46:45,569][00196] Fps is (10 sec: 819.8, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5529600. Throughput: 0: 186.9. Samples: 1384060. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:45,581][00196] Avg episode reward: [(0, '22.728')] [2025-02-13 17:46:47,333][10122] Signal inference workers to resume experience collection... (1350 times) [2025-02-13 17:46:47,334][10139] InferenceWorker_p0-w0: resuming experience collection (1350 times) [2025-02-13 17:46:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5533696. Throughput: 0: 195.8. Samples: 1385304. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:50,577][00196] Avg episode reward: [(0, '22.628')] [2025-02-13 17:46:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.5). Total num frames: 5537792. Throughput: 0: 193.6. Samples: 1386394. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:46:55,577][00196] Avg episode reward: [(0, '22.827')] [2025-02-13 17:47:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5541888. Throughput: 0: 191.7. Samples: 1386952. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:47:00,576][00196] Avg episode reward: [(0, '23.111')] [2025-02-13 17:47:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5545984. Throughput: 0: 195.8. Samples: 1388186. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:47:05,578][00196] Avg episode reward: [(0, '23.037')] [2025-02-13 17:47:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5550080. Throughput: 0: 188.2. Samples: 1389164. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:47:10,574][00196] Avg episode reward: [(0, '22.965')] [2025-02-13 17:47:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5554176. Throughput: 0: 196.8. Samples: 1390002. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:47:15,572][00196] Avg episode reward: [(0, '23.540')] [2025-02-13 17:47:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5558272. Throughput: 0: 195.7. Samples: 1391046. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:47:20,583][00196] Avg episode reward: [(0, '23.981')] [2025-02-13 17:47:25,573][00196] Fps is (10 sec: 818.9, 60 sec: 819.1, 300 sec: 777.5). Total num frames: 5562368. Throughput: 0: 189.4. Samples: 1392098. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:47:25,583][00196] Avg episode reward: [(0, '24.064')] [2025-02-13 17:47:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5566464. Throughput: 0: 198.4. Samples: 1392986. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:47:30,575][00196] Avg episode reward: [(0, '23.906')] [2025-02-13 17:47:34,640][10139] Updated weights for policy 0, policy_version 1360 (0.2101) [2025-02-13 17:47:35,569][00196] Fps is (10 sec: 819.5, 60 sec: 819.3, 300 sec: 777.5). Total num frames: 5570560. Throughput: 0: 195.3. Samples: 1394092. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:47:35,577][00196] Avg episode reward: [(0, '23.948')] [2025-02-13 17:47:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5570560. Throughput: 0: 193.5. Samples: 1395102. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:47:40,577][00196] Avg episode reward: [(0, '23.948')] [2025-02-13 17:47:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5574656. Throughput: 0: 192.5. Samples: 1395616. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:47:45,575][00196] Avg episode reward: [(0, '23.499')] [2025-02-13 17:47:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5578752. Throughput: 0: 199.3. Samples: 1397154. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:47:50,574][00196] Avg episode reward: [(0, '23.128')] [2025-02-13 17:47:55,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5582848. Throughput: 0: 197.8. Samples: 1398066. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:47:55,580][00196] Avg episode reward: [(0, '23.158')] [2025-02-13 17:47:57,514][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001364_5586944.pth... [2025-02-13 17:47:57,643][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001318_5398528.pth [2025-02-13 17:48:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5586944. Throughput: 0: 187.3. Samples: 1398430. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:48:00,575][00196] Avg episode reward: [(0, '23.683')] [2025-02-13 17:48:05,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5591040. Throughput: 0: 196.2. Samples: 1399874. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:48:05,577][00196] Avg episode reward: [(0, '23.987')] [2025-02-13 17:48:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5595136. Throughput: 0: 193.3. Samples: 1400798. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:48:10,574][00196] Avg episode reward: [(0, '24.019')] [2025-02-13 17:48:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5599232. Throughput: 0: 183.5. Samples: 1401242. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:48:15,578][00196] Avg episode reward: [(0, '24.000')] [2025-02-13 17:48:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5603328. Throughput: 0: 181.8. Samples: 1402274. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:48:20,572][00196] Avg episode reward: [(0, '23.507')] [2025-02-13 17:48:25,570][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 777.5). Total num frames: 5607424. Throughput: 0: 187.9. Samples: 1403558. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:48:25,581][00196] Avg episode reward: [(0, '23.507')] [2025-02-13 17:48:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 763.7). Total num frames: 5607424. Throughput: 0: 188.8. Samples: 1404114. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:48:30,578][00196] Avg episode reward: [(0, '23.777')] [2025-02-13 17:48:31,522][10139] Updated weights for policy 0, policy_version 1370 (0.2252) [2025-02-13 17:48:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5615616. Throughput: 0: 181.9. Samples: 1405338. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:48:35,584][00196] Avg episode reward: [(0, '23.837')] [2025-02-13 17:48:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5615616. Throughput: 0: 184.4. Samples: 1406366. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:48:40,572][00196] Avg episode reward: [(0, '24.015')] [2025-02-13 17:48:45,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5619712. Throughput: 0: 187.2. Samples: 1406852. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:48:45,576][00196] Avg episode reward: [(0, '24.395')] [2025-02-13 17:48:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5623808. Throughput: 0: 179.6. Samples: 1407958. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:48:50,581][00196] Avg episode reward: [(0, '23.723')] [2025-02-13 17:48:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 5627904. Throughput: 0: 190.0. Samples: 1409350. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0) [2025-02-13 17:48:55,572][00196] Avg episode reward: [(0, '23.790')] [2025-02-13 17:49:00,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5632000. Throughput: 0: 187.6. Samples: 1409686. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:00,577][00196] Avg episode reward: [(0, '23.807')] [2025-02-13 17:49:05,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5636096. Throughput: 0: 186.6. Samples: 1410672. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:05,576][00196] Avg episode reward: [(0, '23.533')] [2025-02-13 17:49:10,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5640192. Throughput: 0: 188.6. Samples: 1412046. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:10,582][00196] Avg episode reward: [(0, '23.187')] [2025-02-13 17:49:15,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5644288. Throughput: 0: 187.0. Samples: 1412530. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:15,572][00196] Avg episode reward: [(0, '23.203')] [2025-02-13 17:49:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5648384. Throughput: 0: 182.5. Samples: 1413550. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:20,572][00196] Avg episode reward: [(0, '23.539')] [2025-02-13 17:49:24,572][10139] Updated weights for policy 0, policy_version 1380 (0.2929) [2025-02-13 17:49:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 777.5). Total num frames: 5652480. Throughput: 0: 191.5. Samples: 1414984. 
Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:25,580][00196] Avg episode reward: [(0, '22.897')] [2025-02-13 17:49:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 777.5). Total num frames: 5656576. Throughput: 0: 196.1. Samples: 1415676. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:30,577][00196] Avg episode reward: [(0, '23.596')] [2025-02-13 17:49:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 5656576. Throughput: 0: 194.6. Samples: 1416716. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:35,582][00196] Avg episode reward: [(0, '24.072')] [2025-02-13 17:49:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5660672. Throughput: 0: 186.5. Samples: 1417744. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0) [2025-02-13 17:49:40,582][00196] Avg episode reward: [(0, '23.612')] [2025-02-13 17:49:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5664768. Throughput: 0: 196.8. Samples: 1418544. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:49:45,572][00196] Avg episode reward: [(0, '23.576')] [2025-02-13 17:49:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5668864. Throughput: 0: 191.6. Samples: 1419294. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:49:50,573][00196] Avg episode reward: [(0, '23.680')] [2025-02-13 17:49:52,585][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001385_5672960.pth... [2025-02-13 17:49:52,708][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001341_5492736.pth [2025-02-13 17:49:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5672960. Throughput: 0: 193.3. Samples: 1420746. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:49:55,576][00196] Avg episode reward: [(0, '23.339')] [2025-02-13 17:50:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5677056. Throughput: 0: 186.7. Samples: 1420932. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:00,572][00196] Avg episode reward: [(0, '23.557')] [2025-02-13 17:50:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 5681152. Throughput: 0: 183.6. Samples: 1421812. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:05,581][00196] Avg episode reward: [(0, '23.884')] [2025-02-13 17:50:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5685248. Throughput: 0: 177.6. Samples: 1422978. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:10,574][00196] Avg episode reward: [(0, '23.733')] [2025-02-13 17:50:15,575][00196] Fps is (10 sec: 818.7, 60 sec: 750.9, 300 sec: 763.6). Total num frames: 5689344. Throughput: 0: 182.5. Samples: 1423888. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:15,578][00196] Avg episode reward: [(0, '23.072')] [2025-02-13 17:50:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 5689344. Throughput: 0: 181.9. Samples: 1424900. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:20,573][00196] Avg episode reward: [(0, '23.089')] [2025-02-13 17:50:21,059][10139] Updated weights for policy 0, policy_version 1390 (0.1740) [2025-02-13 17:50:25,569][00196] Fps is (10 sec: 409.9, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 5693440. Throughput: 0: 181.8. Samples: 1425926. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:25,582][00196] Avg episode reward: [(0, '22.820')] [2025-02-13 17:50:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 5697536. Throughput: 0: 184.0. Samples: 1426822. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:50:30,577][00196] Avg episode reward: [(0, '22.578')] [2025-02-13 17:50:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5701632. Throughput: 0: 186.2. Samples: 1427674. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:50:35,573][00196] Avg episode reward: [(0, '22.986')] [2025-02-13 17:50:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5705728. Throughput: 0: 179.8. Samples: 1428836. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:40,572][00196] Avg episode reward: [(0, '23.210')] [2025-02-13 17:50:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5709824. Throughput: 0: 180.1. Samples: 1429038. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:45,573][00196] Avg episode reward: [(0, '24.331')] [2025-02-13 17:50:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5713920. Throughput: 0: 188.5. Samples: 1430296. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:50,577][00196] Avg episode reward: [(0, '24.689')] [2025-02-13 17:50:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5718016. Throughput: 0: 182.7. Samples: 1431198. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0) [2025-02-13 17:50:55,580][00196] Avg episode reward: [(0, '24.853')] [2025-02-13 17:51:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5722112. Throughput: 0: 181.0. Samples: 1432032. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0) [2025-02-13 17:51:00,577][00196] Avg episode reward: [(0, '25.177')] [2025-02-13 17:51:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5726208. Throughput: 0: 182.1. Samples: 1433096. 
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:51:05,575][00196] Avg episode reward: [(0, '24.690')]
[2025-02-13 17:51:10,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 5726208. Throughput: 0: 181.7. Samples: 1434104. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:51:10,575][00196] Avg episode reward: [(0, '24.316')]
[2025-02-13 17:51:15,564][10139] Updated weights for policy 0, policy_version 1400 (0.1160)
[2025-02-13 17:51:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 5734400. Throughput: 0: 180.8. Samples: 1434956. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:51:15,581][00196] Avg episode reward: [(0, '24.836')]
[2025-02-13 17:51:18,382][10122] Signal inference workers to stop experience collection... (1400 times)
[2025-02-13 17:51:18,438][10139] InferenceWorker_p0-w0: stopping experience collection (1400 times)
[2025-02-13 17:51:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5734400. Throughput: 0: 187.9. Samples: 1436130. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:51:20,579][00196] Avg episode reward: [(0, '24.764')]
[2025-02-13 17:51:20,938][10122] Signal inference workers to resume experience collection... (1400 times)
[2025-02-13 17:51:20,939][10139] InferenceWorker_p0-w0: resuming experience collection (1400 times)
[2025-02-13 17:51:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5738496. Throughput: 0: 184.9. Samples: 1437156. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:51:25,579][00196] Avg episode reward: [(0, '24.629')]
[2025-02-13 17:51:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5742592. Throughput: 0: 190.2. Samples: 1437598.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:51:30,583][00196] Avg episode reward: [(0, '25.183')]
[2025-02-13 17:51:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5746688. Throughput: 0: 195.6. Samples: 1439098. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:51:35,574][00196] Avg episode reward: [(0, '24.432')]
[2025-02-13 17:51:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5750784. Throughput: 0: 191.6. Samples: 1439820. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:51:40,576][00196] Avg episode reward: [(0, '24.253')]
[2025-02-13 17:51:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5754880. Throughput: 0: 183.6. Samples: 1440292. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:51:45,574][00196] Avg episode reward: [(0, '25.193')]
[2025-02-13 17:51:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5758976. Throughput: 0: 193.1. Samples: 1441786. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:51:50,573][00196] Avg episode reward: [(0, '25.341')]
[2025-02-13 17:51:53,407][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001407_5763072.pth...
[2025-02-13 17:51:53,543][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001364_5586944.pth
[2025-02-13 17:51:55,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5763072. Throughput: 0: 189.6. Samples: 1442636. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:51:55,576][00196] Avg episode reward: [(0, '24.699')]
[2025-02-13 17:52:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5767168. Throughput: 0: 184.0. Samples: 1443238.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:52:00,572][00196] Avg episode reward: [(0, '23.700')]
[2025-02-13 17:52:05,570][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5771264. Throughput: 0: 184.4. Samples: 1444430. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:52:05,583][00196] Avg episode reward: [(0, '24.012')]
[2025-02-13 17:52:10,534][10139] Updated weights for policy 0, policy_version 1410 (0.1758)
[2025-02-13 17:52:10,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 749.8). Total num frames: 5775360. Throughput: 0: 182.3. Samples: 1445358. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:52:10,575][00196] Avg episode reward: [(0, '23.991')]
[2025-02-13 17:52:15,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5779456. Throughput: 0: 192.4. Samples: 1446258. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:52:15,573][00196] Avg episode reward: [(0, '23.650')]
[2025-02-13 17:52:20,571][00196] Fps is (10 sec: 819.3, 60 sec: 819.2, 300 sec: 749.8). Total num frames: 5783552. Throughput: 0: 184.8. Samples: 1447416. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:52:20,576][00196] Avg episode reward: [(0, '24.468')]
[2025-02-13 17:52:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 735.9). Total num frames: 5783552. Throughput: 0: 190.5. Samples: 1448392. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:52:25,573][00196] Avg episode reward: [(0, '24.690')]
[2025-02-13 17:52:30,569][00196] Fps is (10 sec: 409.7, 60 sec: 750.9, 300 sec: 735.9). Total num frames: 5787648. Throughput: 0: 193.3. Samples: 1448992. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:52:30,572][00196] Avg episode reward: [(0, '24.487')]
[2025-02-13 17:52:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5791744. Throughput: 0: 192.4. Samples: 1450446.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:52:35,579][00196] Avg episode reward: [(0, '24.506')]
[2025-02-13 17:52:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5795840. Throughput: 0: 189.5. Samples: 1451164. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 17:52:40,577][00196] Avg episode reward: [(0, '25.417')]
[2025-02-13 17:52:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5799936. Throughput: 0: 186.4. Samples: 1451624. Policy #0 lag: (min: 1.0, avg: 1.4, max: 3.0)
[2025-02-13 17:52:45,572][00196] Avg episode reward: [(0, '25.746')]
[2025-02-13 17:52:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5804032. Throughput: 0: 191.6. Samples: 1453052. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:52:50,571][00196] Avg episode reward: [(0, '25.651')]
[2025-02-13 17:52:55,571][00196] Fps is (10 sec: 819.0, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5808128. Throughput: 0: 196.6. Samples: 1454206. Policy #0 lag: (min: 1.0, avg: 1.4, max: 2.0)
[2025-02-13 17:52:55,578][00196] Avg episode reward: [(0, '25.709')]
[2025-02-13 17:53:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5812224. Throughput: 0: 184.3. Samples: 1454550. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:00,573][00196] Avg episode reward: [(0, '25.240')]
[2025-02-13 17:53:03,950][10139] Updated weights for policy 0, policy_version 1420 (0.1100)
[2025-02-13 17:53:05,569][00196] Fps is (10 sec: 819.4, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5816320. Throughput: 0: 184.5. Samples: 1455718. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:05,573][00196] Avg episode reward: [(0, '25.026')]
[2025-02-13 17:53:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 749.8). Total num frames: 5820416. Throughput: 0: 189.6. Samples: 1456924.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:53:10,572][00196] Avg episode reward: [(0, '24.870')]
[2025-02-13 17:53:15,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 735.9). Total num frames: 5820416. Throughput: 0: 188.2. Samples: 1457460. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:53:15,575][00196] Avg episode reward: [(0, '24.329')]
[2025-02-13 17:53:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 749.8). Total num frames: 5828608. Throughput: 0: 182.8. Samples: 1458670. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:20,577][00196] Avg episode reward: [(0, '25.260')]
[2025-02-13 17:53:25,569][00196] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 5832704. Throughput: 0: 182.3. Samples: 1459368. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:25,573][00196] Avg episode reward: [(0, '25.451')]
[2025-02-13 17:53:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 735.9). Total num frames: 5832704. Throughput: 0: 190.1. Samples: 1460178. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:30,576][00196] Avg episode reward: [(0, '25.391')]
[2025-02-13 17:53:35,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5836800. Throughput: 0: 187.0. Samples: 1461468. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:35,572][00196] Avg episode reward: [(0, '24.402')]
[2025-02-13 17:53:40,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5840896. Throughput: 0: 191.2. Samples: 1462810. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:40,572][00196] Avg episode reward: [(0, '24.350')]
[2025-02-13 17:53:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5844992. Throughput: 0: 188.0. Samples: 1463010.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:45,577][00196] Avg episode reward: [(0, '24.405')]
[2025-02-13 17:53:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5849088. Throughput: 0: 184.5. Samples: 1464020. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:50,572][00196] Avg episode reward: [(0, '24.512')]
[2025-02-13 17:53:53,263][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001429_5853184.pth...
[2025-02-13 17:53:53,396][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001385_5672960.pth
[2025-02-13 17:53:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 749.8). Total num frames: 5853184. Throughput: 0: 193.1. Samples: 1465614. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:53:55,574][00196] Avg episode reward: [(0, '24.569')]
[2025-02-13 17:53:58,347][10139] Updated weights for policy 0, policy_version 1430 (0.1369)
[2025-02-13 17:54:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5857280. Throughput: 0: 188.8. Samples: 1465954. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:54:00,574][00196] Avg episode reward: [(0, '24.946')]
[2025-02-13 17:54:05,572][00196] Fps is (10 sec: 819.0, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5861376. Throughput: 0: 183.1. Samples: 1466908. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:54:05,577][00196] Avg episode reward: [(0, '25.032')]
[2025-02-13 17:54:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5865472. Throughput: 0: 194.8. Samples: 1468132. Policy #0 lag: (min: 1.0, avg: 1.6, max: 3.0)
[2025-02-13 17:54:10,582][00196] Avg episode reward: [(0, '25.399')]
[2025-02-13 17:54:15,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 749.8). Total num frames: 5869568. Throughput: 0: 195.4. Samples: 1468970.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:54:15,573][00196] Avg episode reward: [(0, '25.357')]
[2025-02-13 17:54:20,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 735.9). Total num frames: 5869568. Throughput: 0: 187.2. Samples: 1469894. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:54:20,574][00196] Avg episode reward: [(0, '25.626')]
[2025-02-13 17:54:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 735.9). Total num frames: 5873664. Throughput: 0: 181.9. Samples: 1470994. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:54:25,580][00196] Avg episode reward: [(0, '25.837')]
[2025-02-13 17:54:30,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5877760. Throughput: 0: 193.4. Samples: 1471712. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:54:30,572][00196] Avg episode reward: [(0, '25.586')]
[2025-02-13 17:54:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5881856. Throughput: 0: 192.8. Samples: 1472694. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:54:35,572][00196] Avg episode reward: [(0, '26.047')]
[2025-02-13 17:54:37,167][10122] Saving new best policy, reward=26.047!
[2025-02-13 17:54:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5885952. Throughput: 0: 188.2. Samples: 1474082. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:54:40,579][00196] Avg episode reward: [(0, '26.172')]
[2025-02-13 17:54:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5890048. Throughput: 0: 195.7. Samples: 1474760. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:54:45,575][00196] Avg episode reward: [(0, '26.548')]
[2025-02-13 17:54:46,164][10122] Saving new best policy, reward=26.172!
[2025-02-13 17:54:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5894144.
Throughput: 0: 195.0. Samples: 1475682. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:54:50,575][00196] Avg episode reward: [(0, '26.919')]
[2025-02-13 17:54:52,802][10122] Saving new best policy, reward=26.548!
[2025-02-13 17:54:52,810][10139] Updated weights for policy 0, policy_version 1440 (0.1170)
[2025-02-13 17:54:52,971][10122] Saving new best policy, reward=26.919!
[2025-02-13 17:54:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5898240. Throughput: 0: 195.2. Samples: 1476918. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:54:55,583][00196] Avg episode reward: [(0, '27.459')]
[2025-02-13 17:54:57,757][10122] Saving new best policy, reward=27.459!
[2025-02-13 17:55:00,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5902336. Throughput: 0: 187.9. Samples: 1477426. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:00,583][00196] Avg episode reward: [(0, '27.204')]
[2025-02-13 17:55:05,570][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 749.8). Total num frames: 5906432. Throughput: 0: 190.7. Samples: 1478476. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:05,575][00196] Avg episode reward: [(0, '27.584')]
[2025-02-13 17:55:09,596][10122] Saving new best policy, reward=27.584!
[2025-02-13 17:55:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5910528. Throughput: 0: 189.5. Samples: 1479522. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:55:10,572][00196] Avg episode reward: [(0, '26.874')]
[2025-02-13 17:55:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5914624. Throughput: 0: 189.0. Samples: 1480216. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:55:15,573][00196] Avg episode reward: [(0, '26.874')]
[2025-02-13 17:55:20,572][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 5918720.
Throughput: 0: 191.0. Samples: 1481288. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:20,578][00196] Avg episode reward: [(0, '26.956')]
[2025-02-13 17:55:25,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5918720. Throughput: 0: 181.5. Samples: 1482250. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:25,582][00196] Avg episode reward: [(0, '26.917')]
[2025-02-13 17:55:30,569][00196] Fps is (10 sec: 819.4, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 5926912. Throughput: 0: 185.9. Samples: 1483124. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:30,579][00196] Avg episode reward: [(0, '26.854')]
[2025-02-13 17:55:35,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5926912. Throughput: 0: 191.4. Samples: 1484296. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:35,573][00196] Avg episode reward: [(0, '26.826')]
[2025-02-13 17:55:40,569][00196] Fps is (10 sec: 409.6, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5931008. Throughput: 0: 182.0. Samples: 1485110. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:40,580][00196] Avg episode reward: [(0, '26.643')]
[2025-02-13 17:55:45,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5935104. Throughput: 0: 184.1. Samples: 1485712. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:45,582][00196] Avg episode reward: [(0, '26.372')]
[2025-02-13 17:55:46,749][10139] Updated weights for policy 0, policy_version 1450 (0.0574)
[2025-02-13 17:55:49,528][10122] Signal inference workers to stop experience collection... (1450 times)
[2025-02-13 17:55:49,583][10139] InferenceWorker_p0-w0: stopping experience collection (1450 times)
[2025-02-13 17:55:50,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5939200. Throughput: 0: 195.5. Samples: 1487272.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:55:50,573][00196] Avg episode reward: [(0, '26.573')]
[2025-02-13 17:55:52,076][10122] Signal inference workers to resume experience collection... (1450 times)
[2025-02-13 17:55:52,076][10139] InferenceWorker_p0-w0: resuming experience collection (1450 times)
[2025-02-13 17:55:55,571][00196] Fps is (10 sec: 819.1, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5943296. Throughput: 0: 187.7. Samples: 1487970. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:55:55,576][00196] Avg episode reward: [(0, '26.490')]
[2025-02-13 17:55:58,539][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001452_5947392.pth...
[2025-02-13 17:55:58,657][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001407_5763072.pth
[2025-02-13 17:56:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5947392. Throughput: 0: 182.6. Samples: 1488432. Policy #0 lag: (min: 1.0, avg: 1.5, max: 3.0)
[2025-02-13 17:56:00,573][00196] Avg episode reward: [(0, '26.100')]
[2025-02-13 17:56:05,569][00196] Fps is (10 sec: 819.3, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5951488. Throughput: 0: 188.0. Samples: 1489746. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:05,574][00196] Avg episode reward: [(0, '26.501')]
[2025-02-13 17:56:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5955584. Throughput: 0: 187.9. Samples: 1490706. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:10,578][00196] Avg episode reward: [(0, '26.399')]
[2025-02-13 17:56:15,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 763.7). Total num frames: 5959680. Throughput: 0: 184.8. Samples: 1491438.
Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:15,573][00196] Avg episode reward: [(0, '26.987')]
[2025-02-13 17:56:20,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 763.7). Total num frames: 5963776. Throughput: 0: 185.0. Samples: 1492620. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:20,572][00196] Avg episode reward: [(0, '26.243')]
[2025-02-13 17:56:25,569][00196] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 5967872. Throughput: 0: 187.0. Samples: 1493524. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:25,574][00196] Avg episode reward: [(0, '25.429')]
[2025-02-13 17:56:30,569][00196] Fps is (10 sec: 409.6, 60 sec: 682.7, 300 sec: 749.8). Total num frames: 5967872. Throughput: 0: 187.7. Samples: 1494160. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:30,582][00196] Avg episode reward: [(0, '25.358')]
[2025-02-13 17:56:35,571][00196] Fps is (10 sec: 819.0, 60 sec: 819.2, 300 sec: 763.7). Total num frames: 5976064. Throughput: 0: 186.1. Samples: 1495648. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:35,583][00196] Avg episode reward: [(0, '25.358')]
[2025-02-13 17:56:40,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5976064. Throughput: 0: 192.6. Samples: 1496638. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:40,572][00196] Avg episode reward: [(0, '24.914')]
[2025-02-13 17:56:41,375][10139] Updated weights for policy 0, policy_version 1460 (0.1413)
[2025-02-13 17:56:45,569][00196] Fps is (10 sec: 409.7, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5980160. Throughput: 0: 192.4. Samples: 1497090. Policy #0 lag: (min: 1.0, avg: 1.5, max: 2.0)
[2025-02-13 17:56:45,581][00196] Avg episode reward: [(0, '24.377')]
[2025-02-13 17:56:50,570][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5984256. Throughput: 0: 194.7. Samples: 1498508.
Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:56:50,583][00196] Avg episode reward: [(0, '24.287')]
[2025-02-13 17:56:55,569][00196] Fps is (10 sec: 819.2, 60 sec: 751.0, 300 sec: 749.8). Total num frames: 5988352. Throughput: 0: 198.1. Samples: 1499622. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:56:55,574][00196] Avg episode reward: [(0, '24.243')]
[2025-02-13 17:57:00,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5992448. Throughput: 0: 187.2. Samples: 1499862. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:57:00,582][00196] Avg episode reward: [(0, '24.010')]
[2025-02-13 17:57:05,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 5996544. Throughput: 0: 191.5. Samples: 1501236. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:57:05,581][00196] Avg episode reward: [(0, '23.560')]
[2025-02-13 17:57:10,569][00196] Fps is (10 sec: 819.2, 60 sec: 750.9, 300 sec: 749.8). Total num frames: 6000640. Throughput: 0: 195.6. Samples: 1502328. Policy #0 lag: (min: 1.0, avg: 1.6, max: 2.0)
[2025-02-13 17:57:10,580][00196] Avg episode reward: [(0, '23.617')]
[2025-02-13 17:57:14,539][10122] Stopping Batcher_0...
[2025-02-13 17:57:14,540][10122] Loop batcher_evt_loop terminating...
[2025-02-13 17:57:14,552][00196] Component Batcher_0 stopped!
[2025-02-13 17:57:15,109][00196] Component RolloutWorker_w4 stopped!
[2025-02-13 17:57:15,122][10142] Stopping RolloutWorker_w2...
[2025-02-13 17:57:15,109][10143] Stopping RolloutWorker_w4...
[2025-02-13 17:57:15,124][10143] Loop rollout_proc4_evt_loop terminating...
[2025-02-13 17:57:15,123][00196] Component RolloutWorker_w2 stopped!
[2025-02-13 17:57:15,143][10140] Stopping RolloutWorker_w0...
[2025-02-13 17:57:15,143][00196] Component RolloutWorker_w0 stopped!
[2025-02-13 17:57:15,166][10142] Loop rollout_proc2_evt_loop terminating...
[2025-02-13 17:57:15,178][10146] Stopping RolloutWorker_w6...
[2025-02-13 17:57:15,178][00196] Component RolloutWorker_w6 stopped!
[2025-02-13 17:57:15,193][10140] Loop rollout_proc0_evt_loop terminating...
[2025-02-13 17:57:15,206][10146] Loop rollout_proc6_evt_loop terminating...
[2025-02-13 17:57:15,294][00196] Component RolloutWorker_w7 stopped!
[2025-02-13 17:57:15,306][00196] Component RolloutWorker_w1 stopped!
[2025-02-13 17:57:15,322][00196] Component RolloutWorker_w5 stopped!
[2025-02-13 17:57:15,329][10145] Stopping RolloutWorker_w5...
[2025-02-13 17:57:15,330][10145] Loop rollout_proc5_evt_loop terminating...
[2025-02-13 17:57:15,299][10147] Stopping RolloutWorker_w7...
[2025-02-13 17:57:15,334][10147] Loop rollout_proc7_evt_loop terminating...
[2025-02-13 17:57:15,314][10141] Stopping RolloutWorker_w1...
[2025-02-13 17:57:15,346][10141] Loop rollout_proc1_evt_loop terminating...
[2025-02-13 17:57:15,356][00196] Component RolloutWorker_w3 stopped!
[2025-02-13 17:57:15,363][10144] Stopping RolloutWorker_w3...
[2025-02-13 17:57:15,376][10144] Loop rollout_proc3_evt_loop terminating...
[2025-02-13 17:57:15,952][10139] Weights refcount: 2 0
[2025-02-13 17:57:15,968][00196] Component InferenceWorker_p0-w0 stopped!
[2025-02-13 17:57:15,972][10139] Stopping InferenceWorker_p0-w0...
[2025-02-13 17:57:15,978][10139] Loop inference_proc0-0_evt_loop terminating...
[2025-02-13 17:57:21,065][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001467_6008832.pth...
[2025-02-13 17:57:21,170][10122] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001429_5853184.pth
[2025-02-13 17:57:21,190][10122] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001467_6008832.pth...
[2025-02-13 17:57:21,369][00196] Component LearnerWorker_p0 stopped!
[2025-02-13 17:57:21,372][00196] Waiting for process learner_proc0 to stop...
[2025-02-13 17:57:21,368][10122] Stopping LearnerWorker_p0...
[2025-02-13 17:57:21,382][10122] Loop learner_proc0_evt_loop terminating...
[2025-02-13 17:57:23,990][00196] Waiting for process inference_proc0-0 to join...
[2025-02-13 17:57:23,993][00196] Waiting for process rollout_proc0 to join...
[2025-02-13 17:57:24,178][00196] Waiting for process rollout_proc1 to join...
[2025-02-13 17:57:24,180][00196] Waiting for process rollout_proc2 to join...
[2025-02-13 17:57:24,183][00196] Waiting for process rollout_proc3 to join...
[2025-02-13 17:57:24,185][00196] Waiting for process rollout_proc4 to join...
[2025-02-13 17:57:24,187][00196] Waiting for process rollout_proc5 to join...
[2025-02-13 17:57:24,189][00196] Waiting for process rollout_proc6 to join...
[2025-02-13 17:57:24,194][00196] Waiting for process rollout_proc7 to join...
[2025-02-13 17:57:24,196][00196] Batcher 0 profile tree view:
batching: 29.0893, releasing_batches: 0.5114
[2025-02-13 17:57:24,200][00196] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0052
  wait_policy_total: 70.9126
update_model: 236.0580
  weight_update: 0.3708
one_step: 0.0765
  handle_policy_step: 4718.4245
    deserialize: 177.6020, stack: 27.6225, obs_to_device_normalize: 752.8091, forward: 3469.5439, send_messages: 105.2626
    prepare_outputs: 54.9310
      to_cpu: 5.9625
[2025-02-13 17:57:24,202][00196] Learner 0 profile tree view:
misc: 0.0097, prepare_batch: 1947.0085
train: 5586.3377
  epoch_init: 0.0141, minibatch_init: 0.0226, losses_postprocess: 0.2608, kl_divergence: 0.9562, after_optimizer: 4.5328
  calculate_losses: 2676.2583
    losses_init: 0.0075, forward_head: 2404.0181, bptt_initial: 7.7582, tail: 5.7856, advantages_returns: 0.4055, losses: 2.6864
    bptt: 254.7405
      bptt_forward_core: 253.3419
  update: 2902.9209
    clip: 6.2896
[2025-02-13 17:57:24,204][00196] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 1.2430, enqueue_policy_requests: 105.6150, env_step: 2472.2683, overhead: 60.5900, complete_rollouts: 26.9002
save_policy_outputs: 45.2758
  split_output_tensors: 17.1316
[2025-02-13 17:57:24,206][00196] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 1.2473, enqueue_policy_requests: 103.5635, env_step: 2488.6845, overhead: 58.7221, complete_rollouts: 27.3805
save_policy_outputs: 45.9249
  split_output_tensors: 18.8972
[2025-02-13 17:57:24,207][00196] Loop Runner_EvtLoop terminating...
[2025-02-13 17:57:24,209][00196] Runner profile tree view:
main_loop: 7649.4301
[2025-02-13 17:57:24,211][00196] Collected {0: 6008832}, FPS: 785.5
[2025-02-13 18:02:34,394][00196] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-02-13 18:02:34,397][00196] Overriding arg 'num_workers' with value 1 passed from command line
[2025-02-13 18:02:34,403][00196] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-02-13 18:02:34,405][00196] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-02-13 18:02:34,409][00196] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-02-13 18:02:34,414][00196] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-02-13 18:02:34,416][00196] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2025-02-13 18:02:34,418][00196] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-02-13 18:02:34,420][00196] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2025-02-13 18:02:34,421][00196] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2025-02-13 18:02:34,423][00196] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-02-13 18:02:34,427][00196] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-02-13 18:02:34,429][00196] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-02-13 18:02:34,430][00196] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-02-13 18:02:34,434][00196] Using frameskip 1 and render_action_repeat=4 for evaluation
[2025-02-13 18:02:34,493][00196] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-13 18:02:34,502][00196] RunningMeanStd input shape: (3, 72, 128)
[2025-02-13 18:02:34,514][00196] RunningMeanStd input shape: (1,)
[2025-02-13 18:02:34,577][00196] ConvEncoder: input_channels=3
[2025-02-13 18:02:34,786][00196] Conv encoder output size: 512
[2025-02-13 18:02:34,789][00196] Policy head output size: 512
[2025-02-13 18:02:34,819][00196] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001467_6008832.pth...
[2025-02-13 18:02:35,610][00196] Num frames 100...
[2025-02-13 18:02:35,844][00196] Num frames 200...
[2025-02-13 18:02:36,110][00196] Num frames 300...
[2025-02-13 18:02:36,409][00196] Num frames 400...
[2025-02-13 18:02:36,702][00196] Num frames 500...
[2025-02-13 18:02:36,987][00196] Num frames 600...
[2025-02-13 18:02:37,233][00196] Avg episode rewards: #0: 14.710, true rewards: #0: 6.710
[2025-02-13 18:02:37,235][00196] Avg episode reward: 14.710, avg true_objective: 6.710
[2025-02-13 18:02:37,306][00196] Num frames 700...
[2025-02-13 18:02:37,554][00196] Num frames 800...
[2025-02-13 18:02:37,789][00196] Num frames 900...
[2025-02-13 18:02:38,021][00196] Num frames 1000...
[2025-02-13 18:02:38,278][00196] Num frames 1100...
[2025-02-13 18:02:38,509][00196] Num frames 1200...
[2025-02-13 18:02:38,746][00196] Num frames 1300...
[2025-02-13 18:02:38,983][00196] Num frames 1400...
[2025-02-13 18:02:39,232][00196] Num frames 1500...
[2025-02-13 18:02:39,470][00196] Num frames 1600...
[2025-02-13 18:02:39,714][00196] Num frames 1700...
[2025-02-13 18:02:39,952][00196] Num frames 1800...
[2025-02-13 18:02:40,208][00196] Num frames 1900...
[2025-02-13 18:02:40,488][00196] Num frames 2000...
[2025-02-13 18:02:40,813][00196] Num frames 2100...
[2025-02-13 18:02:41,114][00196] Num frames 2200...
[2025-02-13 18:02:41,446][00196] Num frames 2300...
[2025-02-13 18:02:41,721][00196] Avg episode rewards: #0: 29.855, true rewards: #0: 11.855
[2025-02-13 18:02:41,724][00196] Avg episode reward: 29.855, avg true_objective: 11.855
[2025-02-13 18:02:41,816][00196] Num frames 2400...
[2025-02-13 18:02:42,136][00196] Num frames 2500...
[2025-02-13 18:02:42,482][00196] Num frames 2600...
[2025-02-13 18:02:42,814][00196] Num frames 2700...
[2025-02-13 18:02:43,104][00196] Num frames 2800...
[2025-02-13 18:02:43,351][00196] Num frames 2900...
[2025-02-13 18:02:43,517][00196] Avg episode rewards: #0: 24.157, true rewards: #0: 9.823
[2025-02-13 18:02:43,524][00196] Avg episode reward: 24.157, avg true_objective: 9.823
[2025-02-13 18:02:43,650][00196] Num frames 3000...
[2025-02-13 18:02:43,899][00196] Num frames 3100...
[2025-02-13 18:02:44,134][00196] Num frames 3200...
[2025-02-13 18:02:44,392][00196] Num frames 3300...
[2025-02-13 18:02:44,644][00196] Num frames 3400...
[2025-02-13 18:02:44,885][00196] Num frames 3500...
[2025-02-13 18:02:45,117][00196] Num frames 3600...
[2025-02-13 18:02:45,340][00196] Avg episode rewards: #0: 21.680, true rewards: #0: 9.180
[2025-02-13 18:02:45,343][00196] Avg episode reward: 21.680, avg true_objective: 9.180
[2025-02-13 18:02:45,416][00196] Num frames 3700...
[2025-02-13 18:02:45,646][00196] Num frames 3800...
[2025-02-13 18:02:45,880][00196] Num frames 3900...
[2025-02-13 18:02:46,100][00196] Num frames 4000...
[2025-02-13 18:02:46,333][00196] Num frames 4100...
[2025-02-13 18:02:46,573][00196] Num frames 4200...
[2025-02-13 18:02:46,815][00196] Avg episode rewards: #0: 19.560, true rewards: #0: 8.560
[2025-02-13 18:02:46,816][00196] Avg episode reward: 19.560, avg true_objective: 8.560
[2025-02-13 18:02:46,867][00196] Num frames 4300...
[2025-02-13 18:02:47,102][00196] Num frames 4400...
[2025-02-13 18:02:47,358][00196] Num frames 4500...
[2025-02-13 18:02:47,651][00196] Num frames 4600...
[2025-02-13 18:02:47,928][00196] Num frames 4700...
[2025-02-13 18:02:48,218][00196] Num frames 4800...
[2025-02-13 18:02:48,484][00196] Num frames 4900...
[2025-02-13 18:02:48,736][00196] Num frames 5000...
[2025-02-13 18:02:48,990][00196] Num frames 5100...
[2025-02-13 18:02:49,234][00196] Num frames 5200...
[2025-02-13 18:02:49,531][00196] Num frames 5300...
[2025-02-13 18:02:49,754][00196] Avg episode rewards: #0: 20.590, true rewards: #0: 8.923
[2025-02-13 18:02:49,758][00196] Avg episode reward: 20.590, avg true_objective: 8.923
[2025-02-13 18:02:49,899][00196] Num frames 5400...
[2025-02-13 18:02:50,193][00196] Num frames 5500...
[2025-02-13 18:02:50,468][00196] Num frames 5600...
[2025-02-13 18:02:50,712][00196] Num frames 5700...
[2025-02-13 18:02:50,943][00196] Num frames 5800...
[2025-02-13 18:02:51,182][00196] Num frames 5900...
[2025-02-13 18:02:51,426][00196] Num frames 6000...
[2025-02-13 18:02:51,666][00196] Num frames 6100...
[2025-02-13 18:02:51,903][00196] Num frames 6200...
[2025-02-13 18:02:52,135][00196] Num frames 6300...
[2025-02-13 18:02:52,376][00196] Avg episode rewards: #0: 20.540, true rewards: #0: 9.111
[2025-02-13 18:02:52,378][00196] Avg episode reward: 20.540, avg true_objective: 9.111
[2025-02-13 18:02:52,435][00196] Num frames 6400...
[2025-02-13 18:02:52,670][00196] Num frames 6500...
[2025-02-13 18:02:52,898][00196] Num frames 6600...
[2025-02-13 18:02:53,169][00196] Num frames 6700...
[2025-02-13 18:02:53,487][00196] Num frames 6800...
[2025-02-13 18:02:53,808][00196] Num frames 6900...
[2025-02-13 18:02:54,117][00196] Num frames 7000...
[2025-02-13 18:02:54,430][00196] Num frames 7100...
[2025-02-13 18:02:54,843][00196] Num frames 7200...
[2025-02-13 18:02:55,234][00196] Num frames 7300...
[2025-02-13 18:02:55,601][00196] Num frames 7400...
[2025-02-13 18:02:55,669][00196] Avg episode rewards: #0: 20.628, true rewards: #0: 9.252
[2025-02-13 18:02:55,672][00196] Avg episode reward: 20.628, avg true_objective: 9.252
[2025-02-13 18:02:55,933][00196] Num frames 7500...
[2025-02-13 18:02:56,172][00196] Num frames 7600...
[2025-02-13 18:02:56,408][00196] Num frames 7700...
[2025-02-13 18:02:56,646][00196] Num frames 7800...
[2025-02-13 18:02:56,906][00196] Num frames 7900...
[2025-02-13 18:02:57,156][00196] Num frames 8000...
[2025-02-13 18:02:57,401][00196] Num frames 8100...
[2025-02-13 18:02:57,630][00196] Num frames 8200...
[2025-02-13 18:02:57,882][00196] Num frames 8300...
[2025-02-13 18:02:57,975][00196] Avg episode rewards: #0: 20.682, true rewards: #0: 9.238
[2025-02-13 18:02:57,977][00196] Avg episode reward: 20.682, avg true_objective: 9.238
[2025-02-13 18:02:58,173][00196] Num frames 8400...
[2025-02-13 18:02:58,427][00196] Num frames 8500...
[2025-02-13 18:02:58,692][00196] Num frames 8600...
[2025-02-13 18:02:58,984][00196] Num frames 8700...
[2025-02-13 18:02:59,257][00196] Num frames 8800...
[2025-02-13 18:02:59,545][00196] Num frames 8900...
[2025-02-13 18:02:59,779][00196] Num frames 9000...
[2025-02-13 18:03:00,038][00196] Num frames 9100...
[2025-02-13 18:03:00,277][00196] Num frames 9200...
[2025-02-13 18:03:00,506][00196] Num frames 9300...
[2025-02-13 18:03:00,761][00196] Num frames 9400...
[2025-02-13 18:03:01,047][00196] Num frames 9500...
[2025-02-13 18:03:01,328][00196] Num frames 9600...
[2025-02-13 18:03:01,596][00196] Num frames 9700...
[2025-02-13 18:03:01,829][00196] Num frames 9800...
[2025-02-13 18:03:02,091][00196] Num frames 9900...
[2025-02-13 18:03:02,342][00196] Num frames 10000...
[2025-02-13 18:03:02,605][00196] Num frames 10100...
[2025-02-13 18:03:02,821][00196] Avg episode rewards: #0: 22.863, true rewards: #0: 10.163
[2025-02-13 18:03:02,824][00196] Avg episode reward: 22.863, avg true_objective: 10.163
[2025-02-13 18:04:20,426][00196] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2025-02-13 18:05:47,126][00196] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-02-13 18:05:47,130][00196] Overriding arg 'num_workers' with value 1 passed from command line
[2025-02-13 18:05:47,132][00196] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-02-13 18:05:47,134][00196] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-02-13 18:05:47,135][00196] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-02-13 18:05:47,137][00196] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-02-13 18:05:47,139][00196] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2025-02-13 18:05:47,142][00196] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-02-13 18:05:47,146][00196] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2025-02-13 18:05:47,147][00196] Adding new argument 'hf_repository'='DiurD/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2025-02-13 18:05:47,149][00196] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-02-13 18:05:47,151][00196] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-02-13 18:05:47,152][00196] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-02-13 18:05:47,157][00196] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-02-13 18:05:47,158][00196] Using frameskip 1 and render_action_repeat=4 for evaluation
[2025-02-13 18:05:47,220][00196] RunningMeanStd input shape: (3, 72, 128)
[2025-02-13 18:05:47,223][00196] RunningMeanStd input shape: (1,)
[2025-02-13 18:05:47,248][00196] ConvEncoder: input_channels=3
[2025-02-13 18:05:47,329][00196] Conv encoder output size: 512
[2025-02-13 18:05:47,333][00196] Policy head output size: 512
[2025-02-13 18:05:47,369][00196] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001467_6008832.pth...
[2025-02-13 18:05:48,268][00196] Num frames 100...
[2025-02-13 18:05:48,605][00196] Num frames 200...
[2025-02-13 18:05:48,916][00196] Num frames 300...
[2025-02-13 18:05:49,243][00196] Num frames 400...
[2025-02-13 18:05:49,567][00196] Num frames 500...
[2025-02-13 18:05:49,877][00196] Num frames 600...
[2025-02-13 18:05:50,109][00196] Num frames 700...
[2025-02-13 18:05:50,344][00196] Num frames 800...
[2025-02-13 18:05:50,582][00196] Num frames 900...
[2025-02-13 18:05:50,873][00196] Num frames 1000...
[2025-02-13 18:05:51,141][00196] Num frames 1100...
[2025-02-13 18:05:51,416][00196] Num frames 1200...
[2025-02-13 18:05:51,673][00196] Num frames 1300...
[2025-02-13 18:05:51,907][00196] Num frames 1400...
[2025-02-13 18:05:52,140][00196] Num frames 1500...
[2025-02-13 18:05:52,377][00196] Avg episode rewards: #0: 40.799, true rewards: #0: 15.800
[2025-02-13 18:05:52,380][00196] Avg episode reward: 40.799, avg true_objective: 15.800
[2025-02-13 18:05:52,433][00196] Num frames 1600...
[2025-02-13 18:05:52,663][00196] Num frames 1700...
[2025-02-13 18:05:52,909][00196] Num frames 1800...
[2025-02-13 18:05:53,139][00196] Num frames 1900...
[2025-02-13 18:05:53,379][00196] Num frames 2000...
[2025-02-13 18:05:53,609][00196] Num frames 2100...
[2025-02-13 18:05:53,848][00196] Num frames 2200...
[2025-02-13 18:05:54,078][00196] Num frames 2300...
[2025-02-13 18:05:54,317][00196] Num frames 2400...
[2025-02-13 18:05:54,552][00196] Num frames 2500...
[2025-02-13 18:05:54,801][00196] Num frames 2600...
[2025-02-13 18:05:55,077][00196] Num frames 2700...
[2025-02-13 18:05:55,346][00196] Num frames 2800...
[2025-02-13 18:05:55,571][00196] Avg episode rewards: #0: 36.300, true rewards: #0: 14.300
[2025-02-13 18:05:55,574][00196] Avg episode reward: 36.300, avg true_objective: 14.300
[2025-02-13 18:05:55,678][00196] Num frames 2900...
[2025-02-13 18:05:55,960][00196] Num frames 3000...
[2025-02-13 18:05:56,222][00196] Num frames 3100...
[2025-02-13 18:05:56,486][00196] Num frames 3200...
[2025-02-13 18:05:56,745][00196] Num frames 3300...
[2025-02-13 18:05:56,990][00196] Num frames 3400...
[2025-02-13 18:05:57,238][00196] Num frames 3500...
[2025-02-13 18:05:57,501][00196] Num frames 3600...
[2025-02-13 18:05:57,745][00196] Num frames 3700...
[2025-02-13 18:05:57,991][00196] Num frames 3800...
[2025-02-13 18:05:58,168][00196] Avg episode rewards: #0: 32.173, true rewards: #0: 12.840
[2025-02-13 18:05:58,170][00196] Avg episode reward: 32.173, avg true_objective: 12.840
[2025-02-13 18:05:58,292][00196] Num frames 3900...
[2025-02-13 18:05:58,529][00196] Num frames 4000...
[2025-02-13 18:05:58,754][00196] Num frames 4100...
[2025-02-13 18:05:59,029][00196] Num frames 4200...
[2025-02-13 18:05:59,295][00196] Num frames 4300...
[2025-02-13 18:05:59,573][00196] Num frames 4400...
[2025-02-13 18:05:59,868][00196] Num frames 4500...
[2025-02-13 18:06:00,192][00196] Num frames 4600...
[2025-02-13 18:06:00,509][00196] Num frames 4700...
[2025-02-13 18:06:00,809][00196] Num frames 4800...
[2025-02-13 18:06:01,134][00196] Num frames 4900...
[2025-02-13 18:06:01,432][00196] Num frames 5000...
[2025-02-13 18:06:01,756][00196] Num frames 5100...
[2025-02-13 18:06:02,068][00196] Num frames 5200...
[2025-02-13 18:06:02,394][00196] Num frames 5300...
[2025-02-13 18:06:02,659][00196] Num frames 5400...
[2025-02-13 18:06:02,897][00196] Num frames 5500...
[2025-02-13 18:06:03,130][00196] Num frames 5600...
[2025-02-13 18:06:03,300][00196] Avg episode rewards: #0: 34.610, true rewards: #0: 14.110
[2025-02-13 18:06:03,302][00196] Avg episode reward: 34.610, avg true_objective: 14.110
[2025-02-13 18:06:03,434][00196] Num frames 5700...
[2025-02-13 18:06:03,681][00196] Num frames 5800...
[2025-02-13 18:06:03,923][00196] Num frames 5900...
[2025-02-13 18:06:04,188][00196] Num frames 6000...
[2025-02-13 18:06:04,469][00196] Num frames 6100...
[2025-02-13 18:06:04,730][00196] Num frames 6200...
[2025-02-13 18:06:04,986][00196] Num frames 6300...
[2025-02-13 18:06:05,232][00196] Num frames 6400...
[2025-02-13 18:06:05,466][00196] Num frames 6500...
[2025-02-13 18:06:05,691][00196] Num frames 6600...
[2025-02-13 18:06:05,879][00196] Avg episode rewards: #0: 32.116, true rewards: #0: 13.316
[2025-02-13 18:06:05,881][00196] Avg episode reward: 32.116, avg true_objective: 13.316
[2025-02-13 18:06:05,995][00196] Num frames 6700...
[2025-02-13 18:06:06,260][00196] Num frames 6800...
[2025-02-13 18:06:06,530][00196] Num frames 6900...
[2025-02-13 18:06:06,795][00196] Num frames 7000...
[2025-02-13 18:06:07,054][00196] Num frames 7100...
[2025-02-13 18:06:07,283][00196] Num frames 7200...
[2025-02-13 18:06:07,520][00196] Num frames 7300...
[2025-02-13 18:06:07,751][00196] Num frames 7400...
[2025-02-13 18:06:07,983][00196] Num frames 7500...
[2025-02-13 18:06:08,240][00196] Avg episode rewards: #0: 29.810, true rewards: #0: 12.643
[2025-02-13 18:06:08,242][00196] Avg episode reward: 29.810, avg true_objective: 12.643
[2025-02-13 18:06:08,275][00196] Num frames 7600...
[2025-02-13 18:06:08,510][00196] Num frames 7700...
[2025-02-13 18:06:08,745][00196] Num frames 7800...
[2025-02-13 18:06:08,980][00196] Num frames 7900...
[2025-02-13 18:06:09,254][00196] Num frames 8000...
[2025-02-13 18:06:09,547][00196] Num frames 8100...
[2025-02-13 18:06:09,823][00196] Num frames 8200...
[2025-02-13 18:06:10,100][00196] Num frames 8300...
[2025-02-13 18:06:10,379][00196] Num frames 8400...
[2025-02-13 18:06:10,661][00196] Num frames 8500...
[2025-02-13 18:06:10,944][00196] Num frames 8600...
[2025-02-13 18:06:11,200][00196] Num frames 8700...
[2025-02-13 18:06:11,452][00196] Num frames 8800...
[2025-02-13 18:06:11,698][00196] Num frames 8900...
[2025-02-13 18:06:11,943][00196] Num frames 9000...
[2025-02-13 18:06:12,174][00196] Num frames 9100...
[2025-02-13 18:06:12,409][00196] Num frames 9200...
[2025-02-13 18:06:12,727][00196] Num frames 9300...
[2025-02-13 18:06:12,835][00196] Avg episode rewards: #0: 31.734, true rewards: #0: 13.306
[2025-02-13 18:06:12,837][00196] Avg episode reward: 31.734, avg true_objective: 13.306
[2025-02-13 18:06:13,092][00196] Num frames 9400...
[2025-02-13 18:06:13,429][00196] Num frames 9500...
[2025-02-13 18:06:13,749][00196] Num frames 9600...
[2025-02-13 18:06:14,047][00196] Num frames 9700...
[2025-02-13 18:06:14,403][00196] Num frames 9800...
[2025-02-13 18:06:14,764][00196] Num frames 9900...
[2025-02-13 18:06:15,102][00196] Num frames 10000...
[2025-02-13 18:06:15,314][00196] Avg episode rewards: #0: 29.812, true rewards: #0: 12.562
[2025-02-13 18:06:15,316][00196] Avg episode reward: 29.812, avg true_objective: 12.562
[2025-02-13 18:06:15,436][00196] Num frames 10100...
[2025-02-13 18:06:15,658][00196] Num frames 10200...
[2025-02-13 18:06:15,902][00196] Num frames 10300...
[2025-02-13 18:06:16,132][00196] Num frames 10400...
[2025-02-13 18:06:16,371][00196] Num frames 10500...
[2025-02-13 18:06:16,598][00196] Num frames 10600...
[2025-02-13 18:06:16,843][00196] Num frames 10700...
[2025-02-13 18:06:17,068][00196] Num frames 10800...
[2025-02-13 18:06:17,299][00196] Num frames 10900...
[2025-02-13 18:06:17,540][00196] Num frames 11000...
[2025-02-13 18:06:17,769][00196] Num frames 11100...
[2025-02-13 18:06:18,015][00196] Num frames 11200...
[2025-02-13 18:06:18,264][00196] Num frames 11300...
[2025-02-13 18:06:18,543][00196] Num frames 11400...
[2025-02-13 18:06:18,840][00196] Num frames 11500...
[2025-02-13 18:06:19,114][00196] Num frames 11600...
[2025-02-13 18:06:19,389][00196] Num frames 11700...
[2025-02-13 18:06:19,668][00196] Num frames 11800...
[2025-02-13 18:06:19,975][00196] Num frames 11900...
[2025-02-13 18:06:20,248][00196] Num frames 12000...
[2025-02-13 18:06:20,486][00196] Num frames 12100...
[2025-02-13 18:06:20,658][00196] Avg episode rewards: #0: 33.278, true rewards: #0: 13.500
[2025-02-13 18:06:20,661][00196] Avg episode reward: 33.278, avg true_objective: 13.500
[2025-02-13 18:06:20,775][00196] Num frames 12200...
[2025-02-13 18:06:21,027][00196] Num frames 12300...
[2025-02-13 18:06:21,272][00196] Num frames 12400...
[2025-02-13 18:06:21,564][00196] Num frames 12500...
[2025-02-13 18:06:21,837][00196] Num frames 12600...
[2025-02-13 18:06:22,125][00196] Num frames 12700...
[2025-02-13 18:06:22,396][00196] Num frames 12800...
[2025-02-13 18:06:22,630][00196] Num frames 12900...
[2025-02-13 18:06:22,854][00196] Num frames 13000...
[2025-02-13 18:06:23,104][00196] Avg episode rewards: #0: 32.078, true rewards: #0: 13.078
[2025-02-13 18:06:23,106][00196] Avg episode reward: 32.078, avg true_objective: 13.078
[2025-02-13 18:08:08,991][00196] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
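The per-episode "Avg episode reward" lines above are running means over all episodes evaluated so far, so the last such line before "Replay video saved" is the final 10-episode result. A minimal sketch of pulling that number out of a log like this one (the function name `final_eval_stats` and the `sample` snippet are illustrative, not part of Sample Factory):

```python
import re

# Each episode summary in the log has this exact shape:
#   [timestamp][pid] Avg episode reward: <r>, avg true_objective: <t>
LINE_RE = re.compile(
    r"Avg episode reward: (?P<reward>[\d.]+), avg true_objective: (?P<obj>[\d.]+)"
)

def final_eval_stats(log_text: str) -> tuple[float, float]:
    """Return (avg_reward, avg_true_objective) from the last summary line,
    i.e. the running mean over every episode evaluated so far."""
    matches = LINE_RE.findall(log_text)
    if not matches:
        raise ValueError("no episode summary lines found")
    reward, objective = matches[-1]
    return float(reward), float(objective)

# Two summary lines copied verbatim from the second evaluation run above:
sample = (
    "[2025-02-13 18:06:20,661][00196] Avg episode reward: 33.278, avg true_objective: 13.500\n"
    "[2025-02-13 18:06:23,106][00196] Avg episode reward: 32.078, avg true_objective: 13.078\n"
)
print(final_eval_stats(sample))  # (32.078, 13.078)
```

Applied to the full log, this gives 22.863 for the first evaluation run and 32.078 for the run that was pushed to the Hub.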