|
[2025-08-17 18:54:22,730][03365] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-08-17 18:54:22,732][03365] Rollout worker 0 uses device cpu |
|
[2025-08-17 18:54:22,733][03365] Rollout worker 1 uses device cpu |
|
[2025-08-17 18:54:22,734][03365] Rollout worker 2 uses device cpu |
|
[2025-08-17 18:54:22,735][03365] Rollout worker 3 uses device cpu |
|
[2025-08-17 18:54:22,736][03365] Rollout worker 4 uses device cpu |
|
[2025-08-17 18:54:22,737][03365] Rollout worker 5 uses device cpu |
|
[2025-08-17 18:54:22,738][03365] Rollout worker 6 uses device cpu |
|
[2025-08-17 18:54:22,739][03365] Rollout worker 7 uses device cpu |
|
[2025-08-17 18:54:22,884][03365] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:54:22,885][03365] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-08-17 18:54:22,914][03365] Starting all processes... |
|
[2025-08-17 18:54:22,915][03365] Starting process learner_proc0 |
|
[2025-08-17 18:54:22,918][03365] EvtLoop [Runner_EvtLoop, process=main process 3365] unhandled exception in slot='_on_start' connected to emitter=Emitter(object_id='Runner_EvtLoop', signal_name='start'), args=() |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal |
|
slot_callable(*args) |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start |
|
self._start_processes() |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes |
|
p.start() |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 515, in start |
|
self._process.start() |
|
File "/usr/lib/python3.11/multiprocessing/process.py", line 121, in start |
|
self._popen = self._Popen(self) |
|
^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/context.py", line 288, in _Popen |
|
return Popen(process_obj) |
|
^^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 32, in __init__ |
|
super().__init__(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19, in __init__ |
|
self._launch(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 47, in _launch |
|
reduction.dump(process_obj, fp) |
|
File "/usr/lib/python3.11/multiprocessing/reduction.py", line 60, in dump |
|
ForkingPickler(file, protocol).dump(obj) |
|
TypeError: cannot pickle 'TLSBuffer' object |
|
[2025-08-17 18:54:22,923][03365] Unhandled exception cannot pickle 'TLSBuffer' object in evt loop Runner_EvtLoop |
|
[2025-08-17 18:54:22,924][03365] Uncaught exception in Runner evt loop |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner.py", line 770, in run |
|
evt_loop_status = self.event_loop.exec() |
|
^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 403, in exec |
|
raise exc |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 399, in exec |
|
while self._loop_iteration(): |
|
^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration |
|
self._process_signal(s) |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 358, in _process_signal |
|
raise exc |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal |
|
slot_callable(*args) |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start |
|
self._start_processes() |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes |
|
p.start() |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 515, in start |
|
self._process.start() |
|
File "/usr/lib/python3.11/multiprocessing/process.py", line 121, in start |
|
self._popen = self._Popen(self) |
|
^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/context.py", line 288, in _Popen |
|
return Popen(process_obj) |
|
^^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 32, in __init__ |
|
super().__init__(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19, in __init__ |
|
self._launch(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 47, in _launch |
|
reduction.dump(process_obj, fp) |
|
File "/usr/lib/python3.11/multiprocessing/reduction.py", line 60, in dump |
|
ForkingPickler(file, protocol).dump(obj) |
|
TypeError: cannot pickle 'TLSBuffer' object |
|
[2025-08-17 18:54:22,927][03365] Runner profile tree view: |
|
main_loop: 0.0128 |
|
[2025-08-17 18:54:22,928][03365] Collected {}, FPS: 0.0 |
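Note: the repeated "TypeError: cannot pickle 'TLSBuffer' object" above is raised while the spawn start method pickles the child process object at p.start() (see reduction.dump in the traceback); it means some attribute reachable from the learner process holds an unpicklable handle, here a TLS/SSL buffer. The identical command succeeds later under a fresh interpreter (PID 04426 further down), which is consistent with stale unpicklable state in this process rather than a problem with the configuration. A minimal sketch of the multiprocessing constraint, using a thread lock as a stand-in for the unpicklable object (this only illustrates spawn's pickling behaviour, not Sample Factory internals):

    import multiprocessing as mp
    import threading

    def work(state):
        # never reached in this sketch; the parent fails while pickling `state`
        print(state)

    if __name__ == "__main__":
        ctx = mp.get_context("spawn")               # same start method as in the traceback
        state = {"lock": threading.Lock()}          # locks, sockets, TLS buffers: not picklable
        p = ctx.Process(target=work, args=(state,))
        try:
            p.start()                               # args are pickled here, as in the traceback
        except TypeError as e:
            print(e)                                # cannot pickle '_thread.lock' object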
|
[2025-08-17 18:54:28,037][03365] Environment doom_basic already registered, overwriting... |
|
[2025-08-17 18:54:28,038][03365] Environment doom_two_colors_easy already registered, overwriting... |
|
[2025-08-17 18:54:28,039][03365] Environment doom_two_colors_hard already registered, overwriting... |
|
[2025-08-17 18:54:28,040][03365] Environment doom_dm already registered, overwriting... |
|
[2025-08-17 18:54:28,040][03365] Environment doom_dwango5 already registered, overwriting... |
|
[2025-08-17 18:54:28,041][03365] Environment doom_my_way_home_flat_actions already registered, overwriting... |
|
[2025-08-17 18:54:28,042][03365] Environment doom_defend_the_center_flat_actions already registered, overwriting... |
|
[2025-08-17 18:54:28,043][03365] Environment doom_my_way_home already registered, overwriting... |
|
[2025-08-17 18:54:28,043][03365] Environment doom_deadly_corridor already registered, overwriting... |
|
[2025-08-17 18:54:28,044][03365] Environment doom_defend_the_center already registered, overwriting... |
|
[2025-08-17 18:54:28,046][03365] Environment doom_defend_the_line already registered, overwriting... |
|
[2025-08-17 18:54:28,046][03365] Environment doom_health_gathering already registered, overwriting... |
|
[2025-08-17 18:54:28,047][03365] Environment doom_health_gathering_supreme already registered, overwriting... |
|
[2025-08-17 18:54:28,049][03365] Environment doom_battle already registered, overwriting... |
|
[2025-08-17 18:54:28,050][03365] Environment doom_battle2 already registered, overwriting... |
|
[2025-08-17 18:54:28,051][03365] Environment doom_duel_bots already registered, overwriting... |
|
[2025-08-17 18:54:28,051][03365] Environment doom_deathmatch_bots already registered, overwriting... |
|
[2025-08-17 18:54:28,052][03365] Environment doom_duel already registered, overwriting... |
|
[2025-08-17 18:54:28,053][03365] Environment doom_deathmatch_full already registered, overwriting... |
|
[2025-08-17 18:54:28,053][03365] Environment doom_benchmark already registered, overwriting... |
|
[2025-08-17 18:54:28,054][03365] register_encoder_factory: <function make_vizdoom_encoder at 0x7f3bfaf5fa60> |
|
[2025-08-17 18:54:28,068][03365] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 18:54:28,072][03365] Experiment dir /content/train_dir/default_experiment already exists! |
|
[2025-08-17 18:54:28,073][03365] Resuming existing experiment from /content/train_dir/default_experiment... |
|
[2025-08-17 18:54:28,074][03365] Weights and Biases integration disabled |
|
[2025-08-17 18:54:28,076][03365] Environment var CUDA_VISIBLE_DEVICES is 0 |
|
|
|
[2025-08-17 18:54:31,044][03365] Starting experiment with the following configuration: |
|
help=False |
|
algo=APPO |
|
env=doom_health_gathering_supreme |
|
experiment=default_experiment |
|
train_dir=/content/train_dir |
|
restart_behavior=resume |
|
device=gpu |
|
seed=None |
|
num_policies=1 |
|
async_rl=True |
|
serial_mode=False |
|
batched_sampling=False |
|
num_batches_to_accumulate=2 |
|
worker_num_splits=2 |
|
policy_workers_per_policy=1 |
|
max_policy_lag=1000 |
|
num_workers=8 |
|
num_envs_per_worker=4 |
|
batch_size=1024 |
|
num_batches_per_epoch=1 |
|
num_epochs=1 |
|
rollout=32 |
|
recurrence=32 |
|
shuffle_minibatches=False |
|
gamma=0.99 |
|
reward_scale=1.0 |
|
reward_clip=1000.0 |
|
value_bootstrap=False |
|
normalize_returns=True |
|
exploration_loss_coeff=0.001 |
|
value_loss_coeff=0.5 |
|
kl_loss_coeff=0.0 |
|
exploration_loss=symmetric_kl |
|
gae_lambda=0.95 |
|
ppo_clip_ratio=0.1 |
|
ppo_clip_value=0.2 |
|
with_vtrace=False |
|
vtrace_rho=1.0 |
|
vtrace_c=1.0 |
|
optimizer=adam |
|
adam_eps=1e-06 |
|
adam_beta1=0.9 |
|
adam_beta2=0.999 |
|
max_grad_norm=4.0 |
|
learning_rate=0.0001 |
|
lr_schedule=constant |
|
lr_schedule_kl_threshold=0.008 |
|
lr_adaptive_min=1e-06 |
|
lr_adaptive_max=0.01 |
|
obs_subtract_mean=0.0 |
|
obs_scale=255.0 |
|
normalize_input=True |
|
normalize_input_keys=None |
|
decorrelate_experience_max_seconds=0 |
|
decorrelate_envs_on_one_worker=True |
|
actor_worker_gpus=[] |
|
set_workers_cpu_affinity=True |
|
force_envs_single_thread=False |
|
default_niceness=0 |
|
log_to_file=True |
|
experiment_summaries_interval=10 |
|
flush_summaries_interval=30 |
|
stats_avg=100 |
|
summaries_use_frameskip=True |
|
heartbeat_interval=20 |
|
heartbeat_reporting_interval=600 |
|
train_for_env_steps=4000000 |
|
train_for_seconds=10000000000 |
|
save_every_sec=120 |
|
keep_checkpoints=2 |
|
load_checkpoint_kind=latest |
|
save_milestones_sec=-1 |
|
save_best_every_sec=5 |
|
save_best_metric=reward |
|
save_best_after=100000 |
|
benchmark=False |
|
encoder_mlp_layers=[512, 512] |
|
encoder_conv_architecture=convnet_simple |
|
encoder_conv_mlp_layers=[512] |
|
use_rnn=True |
|
rnn_size=512 |
|
rnn_type=gru |
|
rnn_num_layers=1 |
|
decoder_mlp_layers=[] |
|
nonlinearity=elu |
|
policy_initialization=orthogonal |
|
policy_init_gain=1.0 |
|
actor_critic_share_weights=True |
|
adaptive_stddev=True |
|
continuous_tanh_scale=0.0 |
|
initial_stddev=1.0 |
|
use_env_info_cache=False |
|
env_gpu_actions=False |
|
env_gpu_observations=True |
|
env_frameskip=4 |
|
env_framestack=1 |
|
pixel_format=CHW |
|
use_record_episode_statistics=False |
|
with_wandb=False |
|
wandb_user=None |
|
wandb_project=sample_factory |
|
wandb_group=None |
|
wandb_job_type=SF |
|
wandb_tags=[] |
|
with_pbt=False |
|
pbt_mix_policies_in_one_env=True |
|
pbt_period_env_steps=5000000 |
|
pbt_start_mutation=20000000 |
|
pbt_replace_fraction=0.3 |
|
pbt_mutation_rate=0.15 |
|
pbt_replace_reward_gap=0.1 |
|
pbt_replace_reward_gap_absolute=1e-06 |
|
pbt_optimize_gamma=False |
|
pbt_target_objective=true_objective |
|
pbt_perturb_min=1.1 |
|
pbt_perturb_max=1.5 |
|
num_agents=-1 |
|
num_humans=0 |
|
num_bots=-1 |
|
start_bot_difficulty=None |
|
timelimit=None |
|
res_w=128 |
|
res_h=72 |
|
wide_aspect_ratio=False |
|
eval_env_frameskip=1 |
|
fps=35 |
|
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 |
|
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} |
|
git_hash=unknown |
|
git_repo_name=not a git repository |
|
[2025-08-17 18:54:31,045][03365] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-08-17 18:54:31,048][03365] Rollout worker 0 uses device cpu |
|
[2025-08-17 18:54:31,049][03365] Rollout worker 1 uses device cpu |
|
[2025-08-17 18:54:31,051][03365] Rollout worker 2 uses device cpu |
|
[2025-08-17 18:54:31,051][03365] Rollout worker 3 uses device cpu |
|
[2025-08-17 18:54:31,053][03365] Rollout worker 4 uses device cpu |
|
[2025-08-17 18:54:31,053][03365] Rollout worker 5 uses device cpu |
|
[2025-08-17 18:54:31,054][03365] Rollout worker 6 uses device cpu |
|
[2025-08-17 18:54:31,055][03365] Rollout worker 7 uses device cpu |
|
[2025-08-17 18:54:31,143][03365] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:54:31,144][03365] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-08-17 18:54:31,177][03365] Starting all processes... |
|
[2025-08-17 18:54:31,177][03365] Starting process learner_proc0 |
|
[2025-08-17 18:54:31,181][03365] EvtLoop [Runner_EvtLoop, process=main process 3365] unhandled exception in slot='_on_start' connected to emitter=Emitter(object_id='Runner_EvtLoop', signal_name='start'), args=() |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal |
|
slot_callable(*args) |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start |
|
self._start_processes() |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes |
|
p.start() |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 515, in start |
|
self._process.start() |
|
File "/usr/lib/python3.11/multiprocessing/process.py", line 121, in start |
|
self._popen = self._Popen(self) |
|
^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/context.py", line 288, in _Popen |
|
return Popen(process_obj) |
|
^^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 32, in __init__ |
|
super().__init__(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19, in __init__ |
|
self._launch(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 47, in _launch |
|
reduction.dump(process_obj, fp) |
|
File "/usr/lib/python3.11/multiprocessing/reduction.py", line 60, in dump |
|
ForkingPickler(file, protocol).dump(obj) |
|
TypeError: cannot pickle 'TLSBuffer' object |
|
[2025-08-17 18:54:31,182][03365] Unhandled exception cannot pickle 'TLSBuffer' object in evt loop Runner_EvtLoop |
|
[2025-08-17 18:54:31,183][03365] Uncaught exception in Runner evt loop |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner.py", line 770, in run |
|
evt_loop_status = self.event_loop.exec() |
|
^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 403, in exec |
|
raise exc |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 399, in exec |
|
while self._loop_iteration(): |
|
^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration |
|
self._process_signal(s) |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 358, in _process_signal |
|
raise exc |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal |
|
slot_callable(*args) |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start |
|
self._start_processes() |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes |
|
p.start() |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 515, in start |
|
self._process.start() |
|
File "/usr/lib/python3.11/multiprocessing/process.py", line 121, in start |
|
self._popen = self._Popen(self) |
|
^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/context.py", line 288, in _Popen |
|
return Popen(process_obj) |
|
^^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 32, in __init__ |
|
super().__init__(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19, in __init__ |
|
self._launch(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 47, in _launch |
|
reduction.dump(process_obj, fp) |
|
File "/usr/lib/python3.11/multiprocessing/reduction.py", line 60, in dump |
|
ForkingPickler(file, protocol).dump(obj) |
|
TypeError: cannot pickle 'TLSBuffer' object |
|
[2025-08-17 18:54:31,185][03365] Runner profile tree view: |
|
main_loop: 0.0087 |
|
[2025-08-17 18:54:31,186][03365] Collected {}, FPS: 0.0 |
|
[2025-08-17 18:56:19,221][03365] Environment doom_basic already registered, overwriting... |
|
[2025-08-17 18:56:19,227][03365] Environment doom_two_colors_easy already registered, overwriting... |
|
[2025-08-17 18:56:19,230][03365] Environment doom_two_colors_hard already registered, overwriting... |
|
[2025-08-17 18:56:19,232][03365] Environment doom_dm already registered, overwriting... |
|
[2025-08-17 18:56:19,234][03365] Environment doom_dwango5 already registered, overwriting... |
|
[2025-08-17 18:56:19,236][03365] Environment doom_my_way_home_flat_actions already registered, overwriting... |
|
[2025-08-17 18:56:19,238][03365] Environment doom_defend_the_center_flat_actions already registered, overwriting... |
|
[2025-08-17 18:56:19,240][03365] Environment doom_my_way_home already registered, overwriting... |
|
[2025-08-17 18:56:19,243][03365] Environment doom_deadly_corridor already registered, overwriting... |
|
[2025-08-17 18:56:19,245][03365] Environment doom_defend_the_center already registered, overwriting... |
|
[2025-08-17 18:56:19,246][03365] Environment doom_defend_the_line already registered, overwriting... |
|
[2025-08-17 18:56:19,248][03365] Environment doom_health_gathering already registered, overwriting... |
|
[2025-08-17 18:56:19,258][03365] Environment doom_health_gathering_supreme already registered, overwriting... |
|
[2025-08-17 18:56:19,260][03365] Environment doom_battle already registered, overwriting... |
|
[2025-08-17 18:56:19,262][03365] Environment doom_battle2 already registered, overwriting... |
|
[2025-08-17 18:56:19,264][03365] Environment doom_duel_bots already registered, overwriting... |
|
[2025-08-17 18:56:19,266][03365] Environment doom_deathmatch_bots already registered, overwriting... |
|
[2025-08-17 18:56:19,268][03365] Environment doom_duel already registered, overwriting... |
|
[2025-08-17 18:56:19,270][03365] Environment doom_deathmatch_full already registered, overwriting... |
|
[2025-08-17 18:56:19,272][03365] Environment doom_benchmark already registered, overwriting... |
|
[2025-08-17 18:56:19,274][03365] register_encoder_factory: <function make_vizdoom_encoder at 0x7f3bfaf5fa60> |
|
[2025-08-17 18:56:19,371][03365] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 18:56:19,377][03365] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 18:56:19,379][03365] Overriding arg 'num_envs_per_worker' with value 1 passed from command line |
|
[2025-08-17 18:56:19,387][03365] Experiment dir /content/train_dir/default_experiment already exists! |
|
[2025-08-17 18:56:19,394][03365] Resuming existing experiment from /content/train_dir/default_experiment... |
|
[2025-08-17 18:56:19,398][03365] Weights and Biases integration disabled |
|
[2025-08-17 18:56:19,409][03365] Environment var CUDA_VISIBLE_DEVICES is 0 |
|
|
|
[2025-08-17 18:56:21,750][03365] cfg.num_envs_per_worker=1 must be a multiple of cfg.worker_num_splits=2 (for double-buffered sampling you need to use even number of envs per worker) |
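The rejection above is the double-buffered sampling constraint: with worker_num_splits=2, each rollout worker's environments are split into two alternating groups, so num_envs_per_worker must be even. A minimal sketch of the check, assuming it simply mirrors the logged message rather than the library's actual validation code:

    worker_num_splits = 2                  # value from the config dumps above
    for num_envs_per_worker in (1, 2, 4):
        ok = num_envs_per_worker % worker_num_splits == 0
        status = "ok" if ok else f"rejected: must be a multiple of {worker_num_splits}"
        print(f"num_envs_per_worker={num_envs_per_worker}: {status}")

The retry below with num_envs_per_worker=2 passes this check.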
|
[2025-08-17 18:56:35,076][03365] Environment doom_basic already registered, overwriting... |
|
[2025-08-17 18:56:35,077][03365] Environment doom_two_colors_easy already registered, overwriting... |
|
[2025-08-17 18:56:35,078][03365] Environment doom_two_colors_hard already registered, overwriting... |
|
[2025-08-17 18:56:35,078][03365] Environment doom_dm already registered, overwriting... |
|
[2025-08-17 18:56:35,079][03365] Environment doom_dwango5 already registered, overwriting... |
|
[2025-08-17 18:56:35,080][03365] Environment doom_my_way_home_flat_actions already registered, overwriting... |
|
[2025-08-17 18:56:35,081][03365] Environment doom_defend_the_center_flat_actions already registered, overwriting... |
|
[2025-08-17 18:56:35,081][03365] Environment doom_my_way_home already registered, overwriting... |
|
[2025-08-17 18:56:35,083][03365] Environment doom_deadly_corridor already registered, overwriting... |
|
[2025-08-17 18:56:35,084][03365] Environment doom_defend_the_center already registered, overwriting... |
|
[2025-08-17 18:56:35,085][03365] Environment doom_defend_the_line already registered, overwriting... |
|
[2025-08-17 18:56:35,086][03365] Environment doom_health_gathering already registered, overwriting... |
|
[2025-08-17 18:56:35,087][03365] Environment doom_health_gathering_supreme already registered, overwriting... |
|
[2025-08-17 18:56:35,088][03365] Environment doom_battle already registered, overwriting... |
|
[2025-08-17 18:56:35,089][03365] Environment doom_battle2 already registered, overwriting... |
|
[2025-08-17 18:56:35,089][03365] Environment doom_duel_bots already registered, overwriting... |
|
[2025-08-17 18:56:35,090][03365] Environment doom_deathmatch_bots already registered, overwriting... |
|
[2025-08-17 18:56:35,090][03365] Environment doom_duel already registered, overwriting... |
|
[2025-08-17 18:56:35,091][03365] Environment doom_deathmatch_full already registered, overwriting... |
|
[2025-08-17 18:56:35,092][03365] Environment doom_benchmark already registered, overwriting... |
|
[2025-08-17 18:56:35,093][03365] register_encoder_factory: <function make_vizdoom_encoder at 0x7f3bfaf5fa60> |
|
[2025-08-17 18:56:35,108][03365] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 18:56:35,112][03365] Overriding arg 'num_workers' with value 2 passed from command line |
|
[2025-08-17 18:56:35,113][03365] Overriding arg 'num_envs_per_worker' with value 2 passed from command line |
|
[2025-08-17 18:56:35,118][03365] Experiment dir /content/train_dir/default_experiment already exists! |
|
[2025-08-17 18:56:35,119][03365] Resuming existing experiment from /content/train_dir/default_experiment... |
|
[2025-08-17 18:56:35,121][03365] Weights and Biases integration disabled |
|
[2025-08-17 18:56:35,126][03365] Environment var CUDA_VISIBLE_DEVICES is 0 |
|
|
|
[2025-08-17 18:56:37,836][03365] Starting experiment with the following configuration: |
|
help=False |
|
algo=APPO |
|
env=doom_health_gathering_supreme |
|
experiment=default_experiment |
|
train_dir=/content/train_dir |
|
restart_behavior=resume |
|
device=gpu |
|
seed=None |
|
num_policies=1 |
|
async_rl=True |
|
serial_mode=False |
|
batched_sampling=False |
|
num_batches_to_accumulate=2 |
|
worker_num_splits=2 |
|
policy_workers_per_policy=1 |
|
max_policy_lag=1000 |
|
num_workers=2 |
|
num_envs_per_worker=2 |
|
batch_size=1024 |
|
num_batches_per_epoch=1 |
|
num_epochs=1 |
|
rollout=32 |
|
recurrence=32 |
|
shuffle_minibatches=False |
|
gamma=0.99 |
|
reward_scale=1.0 |
|
reward_clip=1000.0 |
|
value_bootstrap=False |
|
normalize_returns=True |
|
exploration_loss_coeff=0.001 |
|
value_loss_coeff=0.5 |
|
kl_loss_coeff=0.0 |
|
exploration_loss=symmetric_kl |
|
gae_lambda=0.95 |
|
ppo_clip_ratio=0.1 |
|
ppo_clip_value=0.2 |
|
with_vtrace=False |
|
vtrace_rho=1.0 |
|
vtrace_c=1.0 |
|
optimizer=adam |
|
adam_eps=1e-06 |
|
adam_beta1=0.9 |
|
adam_beta2=0.999 |
|
max_grad_norm=4.0 |
|
learning_rate=0.0001 |
|
lr_schedule=constant |
|
lr_schedule_kl_threshold=0.008 |
|
lr_adaptive_min=1e-06 |
|
lr_adaptive_max=0.01 |
|
obs_subtract_mean=0.0 |
|
obs_scale=255.0 |
|
normalize_input=True |
|
normalize_input_keys=None |
|
decorrelate_experience_max_seconds=0 |
|
decorrelate_envs_on_one_worker=True |
|
actor_worker_gpus=[] |
|
set_workers_cpu_affinity=True |
|
force_envs_single_thread=False |
|
default_niceness=0 |
|
log_to_file=True |
|
experiment_summaries_interval=10 |
|
flush_summaries_interval=30 |
|
stats_avg=100 |
|
summaries_use_frameskip=True |
|
heartbeat_interval=20 |
|
heartbeat_reporting_interval=600 |
|
train_for_env_steps=4000000 |
|
train_for_seconds=10000000000 |
|
save_every_sec=120 |
|
keep_checkpoints=2 |
|
load_checkpoint_kind=latest |
|
save_milestones_sec=-1 |
|
save_best_every_sec=5 |
|
save_best_metric=reward |
|
save_best_after=100000 |
|
benchmark=False |
|
encoder_mlp_layers=[512, 512] |
|
encoder_conv_architecture=convnet_simple |
|
encoder_conv_mlp_layers=[512] |
|
use_rnn=True |
|
rnn_size=512 |
|
rnn_type=gru |
|
rnn_num_layers=1 |
|
decoder_mlp_layers=[] |
|
nonlinearity=elu |
|
policy_initialization=orthogonal |
|
policy_init_gain=1.0 |
|
actor_critic_share_weights=True |
|
adaptive_stddev=True |
|
continuous_tanh_scale=0.0 |
|
initial_stddev=1.0 |
|
use_env_info_cache=False |
|
env_gpu_actions=False |
|
env_gpu_observations=True |
|
env_frameskip=4 |
|
env_framestack=1 |
|
pixel_format=CHW |
|
use_record_episode_statistics=False |
|
with_wandb=False |
|
wandb_user=None |
|
wandb_project=sample_factory |
|
wandb_group=None |
|
wandb_job_type=SF |
|
wandb_tags=[] |
|
with_pbt=False |
|
pbt_mix_policies_in_one_env=True |
|
pbt_period_env_steps=5000000 |
|
pbt_start_mutation=20000000 |
|
pbt_replace_fraction=0.3 |
|
pbt_mutation_rate=0.15 |
|
pbt_replace_reward_gap=0.1 |
|
pbt_replace_reward_gap_absolute=1e-06 |
|
pbt_optimize_gamma=False |
|
pbt_target_objective=true_objective |
|
pbt_perturb_min=1.1 |
|
pbt_perturb_max=1.5 |
|
num_agents=-1 |
|
num_humans=0 |
|
num_bots=-1 |
|
start_bot_difficulty=None |
|
timelimit=None |
|
res_w=128 |
|
res_h=72 |
|
wide_aspect_ratio=False |
|
eval_env_frameskip=1 |
|
fps=35 |
|
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 |
|
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} |
|
git_hash=unknown |
|
git_repo_name=not a git repository |
|
[2025-08-17 18:56:37,837][03365] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-08-17 18:56:37,839][03365] Rollout worker 0 uses device cpu |
|
[2025-08-17 18:56:37,841][03365] Rollout worker 1 uses device cpu |
|
[2025-08-17 18:56:37,924][03365] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:56:37,926][03365] InferenceWorker_p0-w0: min num requests: 1 |
|
[2025-08-17 18:56:37,936][03365] Starting all processes... |
|
[2025-08-17 18:56:37,937][03365] Starting process learner_proc0 |
|
[2025-08-17 18:56:37,940][03365] EvtLoop [Runner_EvtLoop, process=main process 3365] unhandled exception in slot='_on_start' connected to emitter=Emitter(object_id='Runner_EvtLoop', signal_name='start'), args=() |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal |
|
slot_callable(*args) |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start |
|
self._start_processes() |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes |
|
p.start() |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 515, in start |
|
self._process.start() |
|
File "/usr/lib/python3.11/multiprocessing/process.py", line 121, in start |
|
self._popen = self._Popen(self) |
|
^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/context.py", line 288, in _Popen |
|
return Popen(process_obj) |
|
^^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 32, in __init__ |
|
super().__init__(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19, in __init__ |
|
self._launch(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 47, in _launch |
|
reduction.dump(process_obj, fp) |
|
File "/usr/lib/python3.11/multiprocessing/reduction.py", line 60, in dump |
|
ForkingPickler(file, protocol).dump(obj) |
|
TypeError: cannot pickle 'TLSBuffer' object |
|
[2025-08-17 18:56:37,942][03365] Unhandled exception cannot pickle 'TLSBuffer' object in evt loop Runner_EvtLoop |
|
[2025-08-17 18:56:37,945][03365] Uncaught exception in Runner evt loop |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner.py", line 770, in run |
|
evt_loop_status = self.event_loop.exec() |
|
^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 403, in exec |
|
raise exc |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 399, in exec |
|
while self._loop_iteration(): |
|
^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration |
|
self._process_signal(s) |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 358, in _process_signal |
|
raise exc |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal |
|
slot_callable(*args) |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 49, in _on_start |
|
self._start_processes() |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 56, in _start_processes |
|
p.start() |
|
File "/usr/local/lib/python3.11/dist-packages/signal_slot/signal_slot.py", line 515, in start |
|
self._process.start() |
|
File "/usr/lib/python3.11/multiprocessing/process.py", line 121, in start |
|
self._popen = self._Popen(self) |
|
^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/context.py", line 288, in _Popen |
|
return Popen(process_obj) |
|
^^^^^^^^^^^^^^^^^^ |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 32, in __init__ |
|
super().__init__(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19, in __init__ |
|
self._launch(process_obj) |
|
File "/usr/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 47, in _launch |
|
reduction.dump(process_obj, fp) |
|
File "/usr/lib/python3.11/multiprocessing/reduction.py", line 60, in dump |
|
ForkingPickler(file, protocol).dump(obj) |
|
TypeError: cannot pickle 'TLSBuffer' object |
|
[2025-08-17 18:56:37,948][03365] Runner profile tree view: |
|
main_loop: 0.0117 |
|
[2025-08-17 18:56:37,949][03365] Collected {}, FPS: 0.0 |
|
[2025-08-17 18:58:01,897][04426] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-08-17 18:58:01,901][04426] Rollout worker 0 uses device cpu |
|
[2025-08-17 18:58:01,902][04426] Rollout worker 1 uses device cpu |
|
[2025-08-17 18:58:02,052][04426] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:02,056][04426] InferenceWorker_p0-w0: min num requests: 1 |
|
[2025-08-17 18:58:02,069][04426] Starting all processes... |
|
[2025-08-17 18:58:02,072][04426] Starting process learner_proc0 |
|
[2025-08-17 18:58:02,153][04426] Starting all processes... |
|
[2025-08-17 18:58:02,201][04426] Starting process inference_proc0-0 |
|
[2025-08-17 18:58:02,202][04426] Starting process rollout_proc0 |
|
[2025-08-17 18:58:02,202][04426] Starting process rollout_proc1 |
|
[2025-08-17 18:58:09,675][04883] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:09,676][04883] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-08-17 18:58:09,701][04883] Num visible devices: 1 |
|
[2025-08-17 18:58:09,813][04871] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:09,813][04871] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-08-17 18:58:09,843][04871] Num visible devices: 1 |
|
[2025-08-17 18:58:09,850][04871] Starting seed is not provided |
|
[2025-08-17 18:58:09,850][04871] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:09,851][04871] Initializing actor-critic model on device cuda:0 |
|
[2025-08-17 18:58:09,853][04871] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 18:58:09,857][04871] RunningMeanStd input shape: (1,) |
|
[2025-08-17 18:58:09,888][04871] ConvEncoder: input_channels=3 |
|
[2025-08-17 18:58:10,087][04882] Worker 1 uses CPU cores [1] |
|
[2025-08-17 18:58:10,110][04884] Worker 0 uses CPU cores [0] |
|
[2025-08-17 18:58:10,241][04871] Conv encoder output size: 512 |
|
[2025-08-17 18:58:10,242][04871] Policy head output size: 512 |
|
[2025-08-17 18:58:10,304][04871] Created Actor Critic model with architecture: |
|
[2025-08-17 18:58:10,304][04871] ActorCriticSharedWeights( |
|
(obs_normalizer): ObservationNormalizer( |
|
(running_mean_std): RunningMeanStdDictInPlace( |
|
(running_mean_std): ModuleDict( |
|
(obs): RunningMeanStdInPlace() |
|
) |
|
) |
|
) |
|
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|
(encoder): VizdoomEncoder( |
|
(basic_encoder): ConvEncoder( |
|
(enc): RecursiveScriptModule( |
|
original_name=ConvEncoderImpl |
|
(conv_head): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Conv2d) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
(2): RecursiveScriptModule(original_name=Conv2d) |
|
(3): RecursiveScriptModule(original_name=ELU) |
|
(4): RecursiveScriptModule(original_name=Conv2d) |
|
(5): RecursiveScriptModule(original_name=ELU) |
|
) |
|
(mlp_layers): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Linear) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
) |
|
) |
|
) |
|
) |
|
(core): ModelCoreRNN( |
|
(core): GRU(512, 512) |
|
) |
|
(decoder): MlpDecoder( |
|
(mlp): Identity() |
|
) |
|
(critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|
(action_parameterization): ActionParameterizationDefault( |
|
(distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|
) |
|
) |
|
[2025-08-17 18:58:10,630][04871] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-08-17 18:58:15,569][04426] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 4426], exiting... |
|
[2025-08-17 18:58:15,571][04882] Stopping RolloutWorker_w1... |
|
[2025-08-17 18:58:15,571][04882] Loop rollout_proc1_evt_loop terminating... |
|
[2025-08-17 18:58:15,575][04426] Runner profile tree view: |
|
main_loop: 13.5049 |
|
[2025-08-17 18:58:15,576][04883] Stopping InferenceWorker_p0-w0... |
|
[2025-08-17 18:58:15,577][04426] Collected {}, FPS: 0.0 |
|
[2025-08-17 18:58:15,578][04883] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-08-17 18:58:15,583][04871] Stopping Batcher_0... |
|
[2025-08-17 18:58:15,584][04871] Loop batcher_evt_loop terminating... |
|
[2025-08-17 18:58:15,574][04884] Stopping RolloutWorker_w0... |
|
[2025-08-17 18:58:15,588][04884] Loop rollout_proc0_evt_loop terminating... |
|
[2025-08-17 18:58:16,873][04426] Environment doom_basic already registered, overwriting... |
|
[2025-08-17 18:58:16,876][04426] Environment doom_two_colors_easy already registered, overwriting... |
|
[2025-08-17 18:58:16,880][04426] Environment doom_two_colors_hard already registered, overwriting... |
|
[2025-08-17 18:58:16,882][04426] Environment doom_dm already registered, overwriting... |
|
[2025-08-17 18:58:16,882][04426] Environment doom_dwango5 already registered, overwriting... |
|
[2025-08-17 18:58:16,884][04426] Environment doom_my_way_home_flat_actions already registered, overwriting... |
|
[2025-08-17 18:58:16,886][04426] Environment doom_defend_the_center_flat_actions already registered, overwriting... |
|
[2025-08-17 18:58:16,887][04426] Environment doom_my_way_home already registered, overwriting... |
|
[2025-08-17 18:58:16,889][04426] Environment doom_deadly_corridor already registered, overwriting... |
|
[2025-08-17 18:58:16,890][04426] Environment doom_defend_the_center already registered, overwriting... |
|
[2025-08-17 18:58:16,890][04426] Environment doom_defend_the_line already registered, overwriting... |
|
[2025-08-17 18:58:16,892][04426] Environment doom_health_gathering already registered, overwriting... |
|
[2025-08-17 18:58:16,893][04426] Environment doom_health_gathering_supreme already registered, overwriting... |
|
[2025-08-17 18:58:16,894][04426] Environment doom_battle already registered, overwriting... |
|
[2025-08-17 18:58:16,895][04426] Environment doom_battle2 already registered, overwriting... |
|
[2025-08-17 18:58:16,896][04426] Environment doom_duel_bots already registered, overwriting... |
|
[2025-08-17 18:58:16,897][04426] Environment doom_deathmatch_bots already registered, overwriting... |
|
[2025-08-17 18:58:16,898][04426] Environment doom_duel already registered, overwriting... |
|
[2025-08-17 18:58:16,899][04426] Environment doom_deathmatch_full already registered, overwriting... |
|
[2025-08-17 18:58:16,900][04426] Environment doom_benchmark already registered, overwriting... |
|
[2025-08-17 18:58:16,901][04426] register_encoder_factory: <function make_vizdoom_encoder at 0x7bc6a644c680> |
|
[2025-08-17 18:58:16,929][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 18:58:16,931][04426] Overriding arg 'num_workers' with value 8 passed from command line |
|
[2025-08-17 18:58:16,932][04426] Overriding arg 'num_envs_per_worker' with value 4 passed from command line |
|
[2025-08-17 18:58:16,940][04426] Experiment dir /content/train_dir/default_experiment already exists! |
|
[2025-08-17 18:58:16,941][04426] Resuming existing experiment from /content/train_dir/default_experiment... |
|
[2025-08-17 18:58:16,944][04426] Weights and Biases integration disabled |
|
[2025-08-17 18:58:16,955][04426] Environment var CUDA_VISIBLE_DEVICES is 0 |
|
|
|
[2025-08-17 18:58:17,999][04871] No checkpoints found |
|
[2025-08-17 18:58:18,000][04871] Did not load from checkpoint, starting from scratch! |
|
[2025-08-17 18:58:18,000][04871] Initialized policy 0 weights for model version 0 |
|
[2025-08-17 18:58:18,003][04871] LearnerWorker_p0 finished initialization! |
|
[2025-08-17 18:58:18,004][04871] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth... |
|
[2025-08-17 18:58:18,034][04871] Stopping LearnerWorker_p0... |
|
[2025-08-17 18:58:18,035][04871] Loop learner_proc0_evt_loop terminating... |
|
[2025-08-17 18:58:19,915][04426] Starting experiment with the following configuration: |
|
help=False |
|
algo=APPO |
|
env=doom_health_gathering_supreme |
|
experiment=default_experiment |
|
train_dir=/content/train_dir |
|
restart_behavior=resume |
|
device=gpu |
|
seed=None |
|
num_policies=1 |
|
async_rl=True |
|
serial_mode=False |
|
batched_sampling=False |
|
num_batches_to_accumulate=2 |
|
worker_num_splits=2 |
|
policy_workers_per_policy=1 |
|
max_policy_lag=1000 |
|
num_workers=8 |
|
num_envs_per_worker=4 |
|
batch_size=1024 |
|
num_batches_per_epoch=1 |
|
num_epochs=1 |
|
rollout=32 |
|
recurrence=32 |
|
shuffle_minibatches=False |
|
gamma=0.99 |
|
reward_scale=1.0 |
|
reward_clip=1000.0 |
|
value_bootstrap=False |
|
normalize_returns=True |
|
exploration_loss_coeff=0.001 |
|
value_loss_coeff=0.5 |
|
kl_loss_coeff=0.0 |
|
exploration_loss=symmetric_kl |
|
gae_lambda=0.95 |
|
ppo_clip_ratio=0.1 |
|
ppo_clip_value=0.2 |
|
with_vtrace=False |
|
vtrace_rho=1.0 |
|
vtrace_c=1.0 |
|
optimizer=adam |
|
adam_eps=1e-06 |
|
adam_beta1=0.9 |
|
adam_beta2=0.999 |
|
max_grad_norm=4.0 |
|
learning_rate=0.0001 |
|
lr_schedule=constant |
|
lr_schedule_kl_threshold=0.008 |
|
lr_adaptive_min=1e-06 |
|
lr_adaptive_max=0.01 |
|
obs_subtract_mean=0.0 |
|
obs_scale=255.0 |
|
normalize_input=True |
|
normalize_input_keys=None |
|
decorrelate_experience_max_seconds=0 |
|
decorrelate_envs_on_one_worker=True |
|
actor_worker_gpus=[] |
|
set_workers_cpu_affinity=True |
|
force_envs_single_thread=False |
|
default_niceness=0 |
|
log_to_file=True |
|
experiment_summaries_interval=10 |
|
flush_summaries_interval=30 |
|
stats_avg=100 |
|
summaries_use_frameskip=True |
|
heartbeat_interval=20 |
|
heartbeat_reporting_interval=600 |
|
train_for_env_steps=4000000 |
|
train_for_seconds=10000000000 |
|
save_every_sec=120 |
|
keep_checkpoints=2 |
|
load_checkpoint_kind=latest |
|
save_milestones_sec=-1 |
|
save_best_every_sec=5 |
|
save_best_metric=reward |
|
save_best_after=100000 |
|
benchmark=False |
|
encoder_mlp_layers=[512, 512] |
|
encoder_conv_architecture=convnet_simple |
|
encoder_conv_mlp_layers=[512] |
|
use_rnn=True |
|
rnn_size=512 |
|
rnn_type=gru |
|
rnn_num_layers=1 |
|
decoder_mlp_layers=[] |
|
nonlinearity=elu |
|
policy_initialization=orthogonal |
|
policy_init_gain=1.0 |
|
actor_critic_share_weights=True |
|
adaptive_stddev=True |
|
continuous_tanh_scale=0.0 |
|
initial_stddev=1.0 |
|
use_env_info_cache=False |
|
env_gpu_actions=False |
|
env_gpu_observations=True |
|
env_frameskip=4 |
|
env_framestack=1 |
|
pixel_format=CHW |
|
use_record_episode_statistics=False |
|
with_wandb=False |
|
wandb_user=None |
|
wandb_project=sample_factory |
|
wandb_group=None |
|
wandb_job_type=SF |
|
wandb_tags=[] |
|
with_pbt=False |
|
pbt_mix_policies_in_one_env=True |
|
pbt_period_env_steps=5000000 |
|
pbt_start_mutation=20000000 |
|
pbt_replace_fraction=0.3 |
|
pbt_mutation_rate=0.15 |
|
pbt_replace_reward_gap=0.1 |
|
pbt_replace_reward_gap_absolute=1e-06 |
|
pbt_optimize_gamma=False |
|
pbt_target_objective=true_objective |
|
pbt_perturb_min=1.1 |
|
pbt_perturb_max=1.5 |
|
num_agents=-1 |
|
num_humans=0 |
|
num_bots=-1 |
|
start_bot_difficulty=None |
|
timelimit=None |
|
res_w=128 |
|
res_h=72 |
|
wide_aspect_ratio=False |
|
eval_env_frameskip=1 |
|
fps=35 |
|
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 |
|
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} |
|
git_hash=unknown |
|
git_repo_name=not a git repository |
|
[2025-08-17 18:58:19,916][04426] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-08-17 18:58:19,918][04426] Rollout worker 0 uses device cpu |
|
[2025-08-17 18:58:19,919][04426] Rollout worker 1 uses device cpu |
|
[2025-08-17 18:58:19,920][04426] Rollout worker 2 uses device cpu |
|
[2025-08-17 18:58:19,921][04426] Rollout worker 3 uses device cpu |
|
[2025-08-17 18:58:19,921][04426] Rollout worker 4 uses device cpu |
|
[2025-08-17 18:58:19,923][04426] Rollout worker 5 uses device cpu |
|
[2025-08-17 18:58:19,924][04426] Rollout worker 6 uses device cpu |
|
[2025-08-17 18:58:19,925][04426] Rollout worker 7 uses device cpu |
|
[2025-08-17 18:58:20,028][04426] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:20,029][04426] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-08-17 18:58:20,161][04426] Starting all processes... |
|
[2025-08-17 18:58:20,162][04426] Starting process learner_proc0 |
|
[2025-08-17 18:58:20,217][04426] Starting all processes... |
|
[2025-08-17 18:58:20,222][04426] Starting process inference_proc0-0 |
|
[2025-08-17 18:58:20,222][04426] Starting process rollout_proc0 |
|
[2025-08-17 18:58:20,223][04426] Starting process rollout_proc1 |
|
[2025-08-17 18:58:20,238][04426] Starting process rollout_proc2 |
|
[2025-08-17 18:58:20,239][04426] Starting process rollout_proc3 |
|
[2025-08-17 18:58:20,239][04426] Starting process rollout_proc4 |
|
[2025-08-17 18:58:20,239][04426] Starting process rollout_proc5 |
|
[2025-08-17 18:58:20,239][04426] Starting process rollout_proc6 |
|
[2025-08-17 18:58:20,239][04426] Starting process rollout_proc7 |
|
[2025-08-17 18:58:35,727][04970] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:35,728][04970] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-08-17 18:58:35,796][04970] Num visible devices: 1 |
|
[2025-08-17 18:58:35,809][04970] Starting seed is not provided |
|
[2025-08-17 18:58:35,810][04970] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:35,811][04970] Initializing actor-critic model on device cuda:0 |
|
[2025-08-17 18:58:35,812][04970] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 18:58:35,814][04970] RunningMeanStd input shape: (1,) |
|
[2025-08-17 18:58:35,936][04970] ConvEncoder: input_channels=3 |
|
[2025-08-17 18:58:36,283][04989] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:36,283][04989] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-08-17 18:58:36,466][04989] Num visible devices: 1 |
|
[2025-08-17 18:58:36,558][04992] Worker 5 uses CPU cores [1] |
|
[2025-08-17 18:58:37,019][04987] Worker 0 uses CPU cores [0] |
|
[2025-08-17 18:58:37,178][04970] Conv encoder output size: 512 |
|
[2025-08-17 18:58:37,179][04970] Policy head output size: 512 |
|
[2025-08-17 18:58:37,238][04991] Worker 3 uses CPU cores [1] |
|
[2025-08-17 18:58:37,345][04970] Created Actor Critic model with architecture: |
|
[2025-08-17 18:58:37,346][04970] ActorCriticSharedWeights( |
|
(obs_normalizer): ObservationNormalizer( |
|
(running_mean_std): RunningMeanStdDictInPlace( |
|
(running_mean_std): ModuleDict( |
|
(obs): RunningMeanStdInPlace() |
|
) |
|
) |
|
) |
|
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|
(encoder): VizdoomEncoder( |
|
(basic_encoder): ConvEncoder( |
|
(enc): RecursiveScriptModule( |
|
original_name=ConvEncoderImpl |
|
(conv_head): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Conv2d) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
(2): RecursiveScriptModule(original_name=Conv2d) |
|
(3): RecursiveScriptModule(original_name=ELU) |
|
(4): RecursiveScriptModule(original_name=Conv2d) |
|
(5): RecursiveScriptModule(original_name=ELU) |
|
) |
|
(mlp_layers): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Linear) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
) |
|
) |
|
) |
|
) |
|
(core): ModelCoreRNN( |
|
(core): GRU(512, 512) |
|
) |
|
(decoder): MlpDecoder( |
|
(mlp): Identity() |
|
) |
|
(critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|
(action_parameterization): ActionParameterizationDefault( |
|
(distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|
) |
|
) |
|
[2025-08-17 18:58:37,388][04988] Worker 1 uses CPU cores [1] |
|
[2025-08-17 18:58:37,395][04994] Worker 6 uses CPU cores [0] |
|
[2025-08-17 18:58:37,444][04995] Worker 7 uses CPU cores [1] |
|
[2025-08-17 18:58:37,467][04993] Worker 2 uses CPU cores [0] |
|
[2025-08-17 18:58:37,504][04990] Worker 4 uses CPU cores [0] |
|
[2025-08-17 18:58:37,656][04970] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-08-17 18:58:38,953][04970] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth... |
|
[2025-08-17 18:58:38,968][04970] Loading model from checkpoint |
|
[2025-08-17 18:58:38,970][04970] Loaded experiment state at self.train_step=0, self.env_steps=0 |
|
[2025-08-17 18:58:38,970][04970] Initialized policy 0 weights for model version 0 |
|
[2025-08-17 18:58:38,972][04970] LearnerWorker_p0 finished initialization! |
|
[2025-08-17 18:58:38,974][04970] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-17 18:58:39,090][04989] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 18:58:39,092][04989] RunningMeanStd input shape: (1,) |
|
[2025-08-17 18:58:39,103][04989] ConvEncoder: input_channels=3 |
|
[2025-08-17 18:58:39,202][04989] Conv encoder output size: 512 |
|
[2025-08-17 18:58:39,202][04989] Policy head output size: 512 |
|
[2025-08-17 18:58:39,246][04426] Inference worker 0-0 is ready! |
|
[2025-08-17 18:58:39,248][04426] All inference workers are ready! Signal rollout workers to start! |
|
[2025-08-17 18:58:39,517][04992] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 18:58:39,520][04993] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 18:58:39,537][04988] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 18:58:39,541][04995] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 18:58:39,557][04987] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 18:58:39,557][04994] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 18:58:39,578][04990] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 18:58:39,588][04991] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 18:58:40,020][04426] Heartbeat connected on Batcher_0 |
|
[2025-08-17 18:58:40,023][04426] Heartbeat connected on LearnerWorker_p0 |
|
[2025-08-17 18:58:40,059][04426] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-08-17 18:58:40,685][04992] Decorrelating experience for 0 frames... |
|
[2025-08-17 18:58:40,815][04994] Decorrelating experience for 0 frames... |
|
[2025-08-17 18:58:40,816][04990] Decorrelating experience for 0 frames... |
|
[2025-08-17 18:58:41,054][04992] Decorrelating experience for 32 frames... |
|
[2025-08-17 18:58:41,071][04987] Decorrelating experience for 0 frames... |
|
[2025-08-17 18:58:41,788][04990] Decorrelating experience for 32 frames... |
|
[2025-08-17 18:58:41,873][04995] Decorrelating experience for 0 frames... |
|
[2025-08-17 18:58:41,956][04426] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-08-17 18:58:42,035][04992] Decorrelating experience for 64 frames... |
|
[2025-08-17 18:58:42,258][04987] Decorrelating experience for 32 frames... |
|
[2025-08-17 18:58:42,260][04994] Decorrelating experience for 32 frames... |
|
[2025-08-17 18:58:42,851][04995] Decorrelating experience for 32 frames... |
|
[2025-08-17 18:58:43,310][04992] Decorrelating experience for 96 frames... |
|
[2025-08-17 18:58:43,478][04990] Decorrelating experience for 64 frames... |
|
[2025-08-17 18:58:43,645][04426] Heartbeat connected on RolloutWorker_w5 |
|
[2025-08-17 18:58:44,051][04995] Decorrelating experience for 64 frames... |
|
[2025-08-17 18:58:44,144][04987] Decorrelating experience for 64 frames... |
|
[2025-08-17 18:58:44,789][04995] Decorrelating experience for 96 frames... |
|
[2025-08-17 18:58:44,851][04994] Decorrelating experience for 64 frames... |
|
[2025-08-17 18:58:45,020][04426] Heartbeat connected on RolloutWorker_w7 |
|
[2025-08-17 18:58:45,075][04990] Decorrelating experience for 96 frames... |
|
[2025-08-17 18:58:45,253][04426] Heartbeat connected on RolloutWorker_w4 |
|
[2025-08-17 18:58:45,513][04987] Decorrelating experience for 96 frames... |
|
[2025-08-17 18:58:45,738][04426] Heartbeat connected on RolloutWorker_w0 |
|
[2025-08-17 18:58:46,258][04994] Decorrelating experience for 96 frames... |
|
[2025-08-17 18:58:46,413][04426] Heartbeat connected on RolloutWorker_w6 |
|
[2025-08-17 18:58:46,956][04426] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 41.6. Samples: 208. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-08-17 18:58:46,959][04426] Avg episode reward: [(0, '2.017')] |
|
[2025-08-17 18:58:48,195][04970] Signal inference workers to stop experience collection... |
|
[2025-08-17 18:58:48,214][04989] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-08-17 18:58:49,936][04970] Signal inference workers to resume experience collection... |
|
[2025-08-17 18:58:49,940][04989] InferenceWorker_p0-w0: resuming experience collection |
|
[2025-08-17 18:58:51,956][04426] Fps is (10 sec: 1228.7, 60 sec: 1228.7, 300 sec: 1228.7). Total num frames: 12288. Throughput: 0: 309.6. Samples: 3096. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
|
[2025-08-17 18:58:51,964][04426] Avg episode reward: [(0, '3.543')] |
|
[2025-08-17 18:58:56,957][04426] Fps is (10 sec: 3276.4, 60 sec: 2184.3, 300 sec: 2184.3). Total num frames: 32768. Throughput: 0: 404.4. Samples: 6066. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 18:58:56,962][04426] Avg episode reward: [(0, '4.044')] |
|
[2025-08-17 18:58:58,832][04989] Updated weights for policy 0, policy_version 10 (0.0023) |
|
[2025-08-17 18:59:01,956][04426] Fps is (10 sec: 4096.2, 60 sec: 2662.4, 300 sec: 2662.4). Total num frames: 53248. Throughput: 0: 606.2. Samples: 12124. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 18:59:01,957][04426] Avg episode reward: [(0, '4.391')] |
|
[2025-08-17 18:59:06,956][04426] Fps is (10 sec: 3277.3, 60 sec: 2621.4, 300 sec: 2621.4). Total num frames: 65536. Throughput: 0: 678.0. Samples: 16950. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 18:59:06,957][04426] Avg episode reward: [(0, '4.317')] |
|
[2025-08-17 18:59:09,774][04989] Updated weights for policy 0, policy_version 20 (0.0012) |
|
[2025-08-17 18:59:11,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3003.7, 300 sec: 3003.7). Total num frames: 90112. Throughput: 0: 675.0. Samples: 20250. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 18:59:11,959][04426] Avg episode reward: [(0, '4.231')] |
|
[2025-08-17 18:59:16,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3042.7, 300 sec: 3042.7). Total num frames: 106496. Throughput: 0: 756.0. Samples: 26460. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 18:59:16,961][04426] Avg episode reward: [(0, '4.351')] |
|
[2025-08-17 18:59:16,991][04970] Saving new best policy, reward=4.351! |
|
[2025-08-17 18:59:20,951][04989] Updated weights for policy 0, policy_version 30 (0.0013) |
|
[2025-08-17 18:59:21,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3174.4, 300 sec: 3174.4). Total num frames: 126976. Throughput: 0: 782.0. Samples: 31278. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 18:59:21,960][04426] Avg episode reward: [(0, '4.481')] |
|
[2025-08-17 18:59:21,963][04970] Saving new best policy, reward=4.481! |
|
[2025-08-17 18:59:26,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 147456. Throughput: 0: 766.3. Samples: 34482. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 18:59:26,957][04426] Avg episode reward: [(0, '4.491')] |
|
[2025-08-17 18:59:26,962][04970] Saving new best policy, reward=4.491! |
|
[2025-08-17 18:59:30,675][04989] Updated weights for policy 0, policy_version 40 (0.0016) |
|
[2025-08-17 18:59:31,957][04426] Fps is (10 sec: 3686.0, 60 sec: 3276.7, 300 sec: 3276.7). Total num frames: 163840. Throughput: 0: 904.6. Samples: 40914. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 18:59:31,958][04426] Avg episode reward: [(0, '4.503')] |
|
[2025-08-17 18:59:31,963][04970] Saving new best policy, reward=4.503! |
|
[2025-08-17 18:59:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3351.3, 300 sec: 3351.3). Total num frames: 184320. Throughput: 0: 950.3. Samples: 45860. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 18:59:36,960][04426] Avg episode reward: [(0, '4.549')] |
|
[2025-08-17 18:59:36,965][04970] Saving new best policy, reward=4.549! |
|
[2025-08-17 18:59:41,958][04426] Fps is (10 sec: 3686.0, 60 sec: 3345.0, 300 sec: 3345.0). Total num frames: 200704. Throughput: 0: 940.2. Samples: 48376. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 18:59:41,962][04426] Avg episode reward: [(0, '4.440')] |
|
[2025-08-17 18:59:42,219][04989] Updated weights for policy 0, policy_version 50 (0.0016) |
|
[2025-08-17 18:59:46,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3339.8). Total num frames: 217088. Throughput: 0: 937.1. Samples: 54292. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 18:59:46,957][04426] Avg episode reward: [(0, '4.427')] |
|
[2025-08-17 18:59:51,956][04426] Fps is (10 sec: 3687.2, 60 sec: 3754.7, 300 sec: 3393.8). Total num frames: 237568. Throughput: 0: 945.2. Samples: 59482. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 18:59:51,957][04426] Avg episode reward: [(0, '4.308')] |
|
[2025-08-17 18:59:53,423][04989] Updated weights for policy 0, policy_version 60 (0.0017) |
|
[2025-08-17 18:59:56,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.8, 300 sec: 3440.6). Total num frames: 258048. Throughput: 0: 944.3. Samples: 62742. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 18:59:56,959][04426] Avg episode reward: [(0, '4.331')] |
|
[2025-08-17 19:00:01,956][04426] Fps is (10 sec: 3686.2, 60 sec: 3686.4, 300 sec: 3430.4). Total num frames: 274432. Throughput: 0: 933.1. Samples: 68452. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:00:01,961][04426] Avg episode reward: [(0, '4.356')] |
|
[2025-08-17 19:00:04,336][04989] Updated weights for policy 0, policy_version 70 (0.0018) |
|
[2025-08-17 19:00:06,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3469.6). Total num frames: 294912. Throughput: 0: 951.5. Samples: 74094. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:00:06,960][04426] Avg episode reward: [(0, '4.611')] |
|
[2025-08-17 19:00:06,965][04970] Saving new best policy, reward=4.611! |
|
[2025-08-17 19:00:11,956][04426] Fps is (10 sec: 4096.1, 60 sec: 3754.6, 300 sec: 3504.3). Total num frames: 315392. Throughput: 0: 951.4. Samples: 77296. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:00:11,960][04426] Avg episode reward: [(0, '4.725')] |
|
[2025-08-17 19:00:11,995][04970] Saving new best policy, reward=4.725! |
|
[2025-08-17 19:00:14,884][04989] Updated weights for policy 0, policy_version 80 (0.0023) |
|
[2025-08-17 19:00:16,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3492.4). Total num frames: 331776. Throughput: 0: 924.2. Samples: 82500. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:00:16,957][04426] Avg episode reward: [(0, '4.539')] |
|
[2025-08-17 19:00:16,965][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000081_331776.pth... |
|
[2025-08-17 19:00:21,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3522.6). Total num frames: 352256. Throughput: 0: 941.0. Samples: 88204. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:00:21,957][04426] Avg episode reward: [(0, '4.307')] |
|
[2025-08-17 19:00:25,359][04989] Updated weights for policy 0, policy_version 90 (0.0022) |
|
[2025-08-17 19:00:26,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3549.9). Total num frames: 372736. Throughput: 0: 955.2. Samples: 91358. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:00:26,958][04426] Avg episode reward: [(0, '4.526')] |
|
[2025-08-17 19:00:31,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3537.5). Total num frames: 389120. Throughput: 0: 928.8. Samples: 96090. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:00:31,961][04426] Avg episode reward: [(0, '4.618')] |
|
[2025-08-17 19:00:36,690][04989] Updated weights for policy 0, policy_version 100 (0.0016) |
|
[2025-08-17 19:00:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3561.7). Total num frames: 409600. Throughput: 0: 952.9. Samples: 102362. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:00:36,957][04426] Avg episode reward: [(0, '4.637')] |
|
[2025-08-17 19:00:41,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3549.9). Total num frames: 425984. Throughput: 0: 950.3. Samples: 105506. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:00:41,957][04426] Avg episode reward: [(0, '4.725')] |
|
[2025-08-17 19:00:46,956][04426] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3538.9). Total num frames: 442368. Throughput: 0: 924.9. Samples: 110070. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:00:46,957][04426] Avg episode reward: [(0, '4.790')] |
|
[2025-08-17 19:00:46,963][04970] Saving new best policy, reward=4.790! |
|
[2025-08-17 19:00:48,143][04989] Updated weights for policy 0, policy_version 110 (0.0017) |
|
[2025-08-17 19:00:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3560.4). Total num frames: 462848. Throughput: 0: 934.9. Samples: 116166. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:00:51,957][04426] Avg episode reward: [(0, '4.699')] |
|
[2025-08-17 19:00:56,956][04426] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3580.2). Total num frames: 483328. Throughput: 0: 935.3. Samples: 119386. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:00:56,959][04426] Avg episode reward: [(0, '4.607')] |
|
[2025-08-17 19:00:59,275][04989] Updated weights for policy 0, policy_version 120 (0.0013) |
|
[2025-08-17 19:01:01,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3569.4). Total num frames: 499712. Throughput: 0: 925.9. Samples: 124166. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:01:01,957][04426] Avg episode reward: [(0, '4.399')] |
|
[2025-08-17 19:01:06,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3615.8). Total num frames: 524288. Throughput: 0: 943.2. Samples: 130648. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:01:06,957][04426] Avg episode reward: [(0, '4.436')] |
|
[2025-08-17 19:01:08,906][04989] Updated weights for policy 0, policy_version 130 (0.0016) |
|
[2025-08-17 19:01:11,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3604.5). Total num frames: 540672. Throughput: 0: 941.6. Samples: 133730. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:01:11,960][04426] Avg episode reward: [(0, '4.491')] |
|
[2025-08-17 19:01:16,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3593.9). Total num frames: 557056. Throughput: 0: 945.0. Samples: 138614. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:01:16,958][04426] Avg episode reward: [(0, '4.412')] |
|
[2025-08-17 19:01:20,119][04989] Updated weights for policy 0, policy_version 140 (0.0020) |
|
[2025-08-17 19:01:21,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3609.6). Total num frames: 577536. Throughput: 0: 946.2. Samples: 144942. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:01:21,957][04426] Avg episode reward: [(0, '4.355')] |
|
[2025-08-17 19:01:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3599.5). Total num frames: 593920. Throughput: 0: 937.2. Samples: 147682. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:01:26,962][04426] Avg episode reward: [(0, '4.278')] |
|
[2025-08-17 19:01:31,353][04989] Updated weights for policy 0, policy_version 150 (0.0014) |
|
[2025-08-17 19:01:31,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3614.1). Total num frames: 614400. Throughput: 0: 953.6. Samples: 152984. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:01:31,960][04426] Avg episode reward: [(0, '4.464')] |
|
[2025-08-17 19:01:36,957][04426] Fps is (10 sec: 4095.6, 60 sec: 3754.6, 300 sec: 3627.9). Total num frames: 634880. Throughput: 0: 959.3. Samples: 159334. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:01:36,961][04426] Avg episode reward: [(0, '4.686')] |
|
[2025-08-17 19:01:41,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3618.1). Total num frames: 651264. Throughput: 0: 938.6. Samples: 161624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-17 19:01:41,957][04426] Avg episode reward: [(0, '4.789')] |
|
[2025-08-17 19:01:42,438][04989] Updated weights for policy 0, policy_version 160 (0.0019) |
|
[2025-08-17 19:01:46,956][04426] Fps is (10 sec: 3686.8, 60 sec: 3822.9, 300 sec: 3631.0). Total num frames: 671744. Throughput: 0: 959.2. Samples: 167328. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:01:46,959][04426] Avg episode reward: [(0, '4.627')] |
|
[2025-08-17 19:01:51,956][04426] Fps is (10 sec: 4095.8, 60 sec: 3822.9, 300 sec: 3643.3). Total num frames: 692224. Throughput: 0: 955.7. Samples: 173654. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-17 19:01:51,960][04426] Avg episode reward: [(0, '4.399')] |
|
[2025-08-17 19:01:52,481][04989] Updated weights for policy 0, policy_version 170 (0.0012) |
|
[2025-08-17 19:01:56,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3633.9). Total num frames: 708608. Throughput: 0: 933.0. Samples: 175716. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:01:56,960][04426] Avg episode reward: [(0, '4.726')] |
|
[2025-08-17 19:02:01,956][04426] Fps is (10 sec: 4096.2, 60 sec: 3891.2, 300 sec: 3665.9). Total num frames: 733184. Throughput: 0: 961.5. Samples: 181882. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:02:01,960][04426] Avg episode reward: [(0, '4.913')] |
|
[2025-08-17 19:02:01,964][04970] Saving new best policy, reward=4.913! |
|
[2025-08-17 19:02:02,861][04989] Updated weights for policy 0, policy_version 180 (0.0013) |
|
[2025-08-17 19:02:06,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3656.4). Total num frames: 749568. Throughput: 0: 954.5. Samples: 187896. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:02:06,960][04426] Avg episode reward: [(0, '4.699')] |
|
[2025-08-17 19:02:11,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3666.9). Total num frames: 770048. Throughput: 0: 938.8. Samples: 189930. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:02:11,963][04426] Avg episode reward: [(0, '4.641')] |
|
[2025-08-17 19:02:13,756][04989] Updated weights for policy 0, policy_version 190 (0.0017) |
|
[2025-08-17 19:02:16,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3676.9). Total num frames: 790528. Throughput: 0: 968.4. Samples: 196560. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:02:16,961][04426] Avg episode reward: [(0, '4.694')] |
|
[2025-08-17 19:02:16,969][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000193_790528.pth... |
|
[2025-08-17 19:02:17,056][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth |
|
[2025-08-17 19:02:21,956][04426] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3667.8). Total num frames: 806912. Throughput: 0: 952.1. Samples: 202176. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:02:21,958][04426] Avg episode reward: [(0, '4.875')] |
|
[2025-08-17 19:02:24,739][04989] Updated weights for policy 0, policy_version 200 (0.0017) |
|
[2025-08-17 19:02:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3677.3). Total num frames: 827392. Throughput: 0: 954.6. Samples: 204582. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:02:26,959][04426] Avg episode reward: [(0, '4.875')] |
|
[2025-08-17 19:02:31,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3686.4). Total num frames: 847872. Throughput: 0: 975.8. Samples: 211240. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:02:31,958][04426] Avg episode reward: [(0, '4.772')] |
|
[2025-08-17 19:02:34,568][04989] Updated weights for policy 0, policy_version 210 (0.0015) |
|
[2025-08-17 19:02:36,959][04426] Fps is (10 sec: 3685.0, 60 sec: 3822.8, 300 sec: 3677.6). Total num frames: 864256. Throughput: 0: 950.7. Samples: 216440. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:02:36,961][04426] Avg episode reward: [(0, '4.676')] |
|
[2025-08-17 19:02:41,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3686.4). Total num frames: 884736. Throughput: 0: 966.4. Samples: 219204. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:02:41,959][04426] Avg episode reward: [(0, '5.014')] |
|
[2025-08-17 19:02:41,964][04970] Saving new best policy, reward=5.014! |
|
[2025-08-17 19:02:45,284][04989] Updated weights for policy 0, policy_version 220 (0.0016) |
|
[2025-08-17 19:02:46,956][04426] Fps is (10 sec: 4097.5, 60 sec: 3891.2, 300 sec: 3694.8). Total num frames: 905216. Throughput: 0: 972.1. Samples: 225626. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:02:46,960][04426] Avg episode reward: [(0, '5.321')] |
|
[2025-08-17 19:02:46,968][04970] Saving new best policy, reward=5.321! |
|
[2025-08-17 19:02:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3686.4). Total num frames: 921600. Throughput: 0: 946.5. Samples: 230490. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:02:51,957][04426] Avg episode reward: [(0, '5.382')] |
|
[2025-08-17 19:02:51,963][04970] Saving new best policy, reward=5.382! |
|
[2025-08-17 19:02:56,363][04989] Updated weights for policy 0, policy_version 230 (0.0013) |
|
[2025-08-17 19:02:56,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3694.4). Total num frames: 942080. Throughput: 0: 969.2. Samples: 233542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:02:56,957][04426] Avg episode reward: [(0, '5.336')] |
|
[2025-08-17 19:03:01,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3702.2). Total num frames: 962560. Throughput: 0: 966.5. Samples: 240054. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:03:01,960][04426] Avg episode reward: [(0, '5.267')] |
|
[2025-08-17 19:03:06,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3694.1). Total num frames: 978944. Throughput: 0: 950.4. Samples: 244946. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:03:06,957][04426] Avg episode reward: [(0, '5.293')] |
|
[2025-08-17 19:03:07,291][04989] Updated weights for policy 0, policy_version 240 (0.0019) |
|
[2025-08-17 19:03:11,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3701.6). Total num frames: 999424. Throughput: 0: 969.4. Samples: 248204. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:11,958][04426] Avg episode reward: [(0, '5.510')] |
|
[2025-08-17 19:03:11,961][04970] Saving new best policy, reward=5.510! |
|
[2025-08-17 19:03:16,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3708.7). Total num frames: 1019904. Throughput: 0: 960.9. Samples: 254482. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:16,957][04426] Avg episode reward: [(0, '5.260')] |
|
[2025-08-17 19:03:17,800][04989] Updated weights for policy 0, policy_version 250 (0.0020) |
|
[2025-08-17 19:03:21,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3701.0). Total num frames: 1036288. Throughput: 0: 949.4. Samples: 259158. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:21,957][04426] Avg episode reward: [(0, '5.377')] |
|
[2025-08-17 19:03:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3708.0). Total num frames: 1056768. Throughput: 0: 959.7. Samples: 262392. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:26,957][04426] Avg episode reward: [(0, '5.227')] |
|
[2025-08-17 19:03:28,098][04989] Updated weights for policy 0, policy_version 260 (0.0013) |
|
[2025-08-17 19:03:31,959][04426] Fps is (10 sec: 4094.6, 60 sec: 3822.7, 300 sec: 3714.6). Total num frames: 1077248. Throughput: 0: 960.3. Samples: 268842. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:31,960][04426] Avg episode reward: [(0, '5.259')] |
|
[2025-08-17 19:03:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3707.2). Total num frames: 1093632. Throughput: 0: 962.7. Samples: 273812. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:36,957][04426] Avg episode reward: [(0, '5.324')] |
|
[2025-08-17 19:03:38,994][04989] Updated weights for policy 0, policy_version 270 (0.0012) |
|
[2025-08-17 19:03:41,956][04426] Fps is (10 sec: 4097.4, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 1118208. Throughput: 0: 968.8. Samples: 277138. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:41,957][04426] Avg episode reward: [(0, '5.331')] |
|
[2025-08-17 19:03:46,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 1134592. Throughput: 0: 958.3. Samples: 283176. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:46,957][04426] Avg episode reward: [(0, '5.316')] |
|
[2025-08-17 19:03:49,960][04989] Updated weights for policy 0, policy_version 280 (0.0016) |
|
[2025-08-17 19:03:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1155072. Throughput: 0: 967.1. Samples: 288464. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:03:51,957][04426] Avg episode reward: [(0, '5.067')] |
|
[2025-08-17 19:03:56,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1175552. Throughput: 0: 965.9. Samples: 291668. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:03:56,957][04426] Avg episode reward: [(0, '5.086')] |
|
[2025-08-17 19:04:00,155][04989] Updated weights for policy 0, policy_version 290 (0.0012) |
|
[2025-08-17 19:04:01,957][04426] Fps is (10 sec: 3685.9, 60 sec: 3822.8, 300 sec: 3818.3). Total num frames: 1191936. Throughput: 0: 951.0. Samples: 297278. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:04:01,962][04426] Avg episode reward: [(0, '5.249')] |
|
[2025-08-17 19:04:06,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1212416. Throughput: 0: 974.2. Samples: 302996. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-17 19:04:06,960][04426] Avg episode reward: [(0, '5.479')] |
|
[2025-08-17 19:04:10,343][04989] Updated weights for policy 0, policy_version 300 (0.0018) |
|
[2025-08-17 19:04:11,956][04426] Fps is (10 sec: 4096.6, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1232896. Throughput: 0: 973.6. Samples: 306202. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-08-17 19:04:11,960][04426] Avg episode reward: [(0, '5.154')] |
|
[2025-08-17 19:04:16,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 1249280. Throughput: 0: 947.1. Samples: 311460. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:04:16,960][04426] Avg episode reward: [(0, '4.923')] |
|
[2025-08-17 19:04:16,968][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000305_1249280.pth... |
|
[2025-08-17 19:04:17,070][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000081_331776.pth |
|
[2025-08-17 19:04:21,644][04989] Updated weights for policy 0, policy_version 310 (0.0022) |
|
[2025-08-17 19:04:21,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 1269760. Throughput: 0: 968.1. Samples: 317378. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:04:21,957][04426] Avg episode reward: [(0, '5.121')] |
|
[2025-08-17 19:04:26,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1290240. Throughput: 0: 965.0. Samples: 320562. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:04:26,957][04426] Avg episode reward: [(0, '5.442')] |
|
[2025-08-17 19:04:31,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3804.4). Total num frames: 1306624. Throughput: 0: 937.2. Samples: 325348. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:04:31,961][04426] Avg episode reward: [(0, '5.831')] |
|
[2025-08-17 19:04:31,963][04970] Saving new best policy, reward=5.831! |
|
[2025-08-17 19:04:32,779][04989] Updated weights for policy 0, policy_version 320 (0.0014) |
|
[2025-08-17 19:04:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1327104. Throughput: 0: 959.7. Samples: 331650. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:04:36,960][04426] Avg episode reward: [(0, '6.145')] |
|
[2025-08-17 19:04:36,970][04970] Saving new best policy, reward=6.145! |
|
[2025-08-17 19:04:41,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1343488. Throughput: 0: 959.2. Samples: 334834. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:04:41,959][04426] Avg episode reward: [(0, '5.917')] |
|
[2025-08-17 19:04:43,756][04989] Updated weights for policy 0, policy_version 330 (0.0017) |
|
[2025-08-17 19:04:46,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1363968. Throughput: 0: 941.7. Samples: 339652. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:04:46,957][04426] Avg episode reward: [(0, '5.683')] |
|
[2025-08-17 19:04:51,956][04426] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1384448. Throughput: 0: 952.1. Samples: 345842. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:04:51,957][04426] Avg episode reward: [(0, '5.844')] |
|
[2025-08-17 19:04:53,902][04989] Updated weights for policy 0, policy_version 340 (0.0017) |
|
[2025-08-17 19:04:56,956][04426] Fps is (10 sec: 3686.2, 60 sec: 3754.6, 300 sec: 3818.3). Total num frames: 1400832. Throughput: 0: 949.6. Samples: 348936. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:04:56,959][04426] Avg episode reward: [(0, '5.928')] |
|
[2025-08-17 19:05:01,956][04426] Fps is (10 sec: 3686.3, 60 sec: 3823.0, 300 sec: 3818.3). Total num frames: 1421312. Throughput: 0: 939.5. Samples: 353736. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:05:01,959][04426] Avg episode reward: [(0, '6.742')] |
|
[2025-08-17 19:05:01,963][04970] Saving new best policy, reward=6.742! |
|
[2025-08-17 19:05:04,933][04989] Updated weights for policy 0, policy_version 350 (0.0013) |
|
[2025-08-17 19:05:06,956][04426] Fps is (10 sec: 4096.2, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1441792. Throughput: 0: 949.1. Samples: 360086. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:05:06,962][04426] Avg episode reward: [(0, '7.018')] |
|
[2025-08-17 19:05:06,969][04970] Saving new best policy, reward=7.018! |
|
[2025-08-17 19:05:11,956][04426] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 1454080. Throughput: 0: 940.5. Samples: 362884. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:05:11,960][04426] Avg episode reward: [(0, '7.104')] |
|
[2025-08-17 19:05:11,964][04970] Saving new best policy, reward=7.104! |
|
[2025-08-17 19:05:16,222][04989] Updated weights for policy 0, policy_version 360 (0.0012) |
|
[2025-08-17 19:05:16,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 1474560. Throughput: 0: 945.9. Samples: 367914. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:05:16,961][04426] Avg episode reward: [(0, '7.152')] |
|
[2025-08-17 19:05:16,967][04970] Saving new best policy, reward=7.152! |
|
[2025-08-17 19:05:21,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 1495040. Throughput: 0: 947.8. Samples: 374302. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:05:21,959][04426] Avg episode reward: [(0, '7.837')] |
|
[2025-08-17 19:05:22,020][04970] Saving new best policy, reward=7.837! |
|
[2025-08-17 19:05:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 1511424. Throughput: 0: 932.7. Samples: 376806. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-08-17 19:05:26,957][04426] Avg episode reward: [(0, '7.950')] |
|
[2025-08-17 19:05:26,962][04970] Saving new best policy, reward=7.950! |
|
[2025-08-17 19:05:27,264][04989] Updated weights for policy 0, policy_version 370 (0.0029) |
|
[2025-08-17 19:05:31,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 1531904. Throughput: 0: 948.8. Samples: 382350. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:05:31,959][04426] Avg episode reward: [(0, '8.143')] |
|
[2025-08-17 19:05:31,962][04970] Saving new best policy, reward=8.143! |
|
[2025-08-17 19:05:36,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1552384. Throughput: 0: 954.4. Samples: 388792. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:05:36,957][04426] Avg episode reward: [(0, '8.227')] |
|
[2025-08-17 19:05:36,965][04970] Saving new best policy, reward=8.227! |
|
[2025-08-17 19:05:37,212][04989] Updated weights for policy 0, policy_version 380 (0.0013) |
|
[2025-08-17 19:05:41,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1568768. Throughput: 0: 929.3. Samples: 390756. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:05:41,960][04426] Avg episode reward: [(0, '7.489')] |
|
[2025-08-17 19:05:46,959][04426] Fps is (10 sec: 3685.1, 60 sec: 3754.5, 300 sec: 3818.3). Total num frames: 1589248. Throughput: 0: 953.7. Samples: 396656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:05:46,961][04426] Avg episode reward: [(0, '7.147')] |
|
[2025-08-17 19:05:48,240][04989] Updated weights for policy 0, policy_version 390 (0.0013) |
|
[2025-08-17 19:05:51,959][04426] Fps is (10 sec: 4094.6, 60 sec: 3754.5, 300 sec: 3818.3). Total num frames: 1609728. Throughput: 0: 946.5. Samples: 402680. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-17 19:05:51,963][04426] Avg episode reward: [(0, '6.990')] |
|
[2025-08-17 19:05:56,957][04426] Fps is (10 sec: 3687.1, 60 sec: 3754.6, 300 sec: 3818.3). Total num frames: 1626112. Throughput: 0: 930.6. Samples: 404762. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:05:56,959][04426] Avg episode reward: [(0, '7.413')] |
|
[2025-08-17 19:05:59,415][04989] Updated weights for policy 0, policy_version 400 (0.0019) |
|
[2025-08-17 19:06:01,956][04426] Fps is (10 sec: 3687.6, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 1646592. Throughput: 0: 955.6. Samples: 410914. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:06:01,957][04426] Avg episode reward: [(0, '7.542')] |
|
[2025-08-17 19:06:06,956][04426] Fps is (10 sec: 4096.6, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1667072. Throughput: 0: 947.3. Samples: 416930. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:06:06,957][04426] Avg episode reward: [(0, '8.538')] |
|
[2025-08-17 19:06:06,964][04970] Saving new best policy, reward=8.538! |
|
[2025-08-17 19:06:10,236][04989] Updated weights for policy 0, policy_version 410 (0.0020) |
|
[2025-08-17 19:06:11,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1683456. Throughput: 0: 941.0. Samples: 419150. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:06:11,957][04426] Avg episode reward: [(0, '9.669')] |
|
[2025-08-17 19:06:11,958][04970] Saving new best policy, reward=9.669! |
|
[2025-08-17 19:06:16,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1703936. Throughput: 0: 958.7. Samples: 425492. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:06:16,957][04426] Avg episode reward: [(0, '10.605')] |
|
[2025-08-17 19:06:17,045][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000417_1708032.pth... |
|
[2025-08-17 19:06:17,155][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000193_790528.pth |
|
[2025-08-17 19:06:17,163][04970] Saving new best policy, reward=10.605! |
|
[2025-08-17 19:06:20,582][04989] Updated weights for policy 0, policy_version 420 (0.0018) |
|
[2025-08-17 19:06:21,958][04426] Fps is (10 sec: 3685.4, 60 sec: 3754.5, 300 sec: 3818.3). Total num frames: 1720320. Throughput: 0: 933.0. Samples: 430780. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:06:21,960][04426] Avg episode reward: [(0, '11.627')] |
|
[2025-08-17 19:06:21,966][04970] Saving new best policy, reward=11.627! |
|
[2025-08-17 19:06:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1740800. Throughput: 0: 947.0. Samples: 433370. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:06:26,957][04426] Avg episode reward: [(0, '12.300')] |
|
[2025-08-17 19:06:26,969][04970] Saving new best policy, reward=12.300! |
|
[2025-08-17 19:06:31,084][04989] Updated weights for policy 0, policy_version 430 (0.0017) |
|
[2025-08-17 19:06:31,956][04426] Fps is (10 sec: 4097.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1761280. Throughput: 0: 958.0. Samples: 439762. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:06:31,960][04426] Avg episode reward: [(0, '11.662')] |
|
[2025-08-17 19:06:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1777664. Throughput: 0: 938.5. Samples: 444908. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:06:36,957][04426] Avg episode reward: [(0, '11.077')] |
|
[2025-08-17 19:06:41,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1798144. Throughput: 0: 958.2. Samples: 447880. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:06:41,957][04426] Avg episode reward: [(0, '10.133')] |
|
[2025-08-17 19:06:41,971][04989] Updated weights for policy 0, policy_version 440 (0.0013) |
|
[2025-08-17 19:06:46,956][04426] Fps is (10 sec: 4505.4, 60 sec: 3891.4, 300 sec: 3832.2). Total num frames: 1822720. Throughput: 0: 964.1. Samples: 454300. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:06:46,957][04426] Avg episode reward: [(0, '11.738')] |
|
[2025-08-17 19:06:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.9, 300 sec: 3818.3). Total num frames: 1835008. Throughput: 0: 935.3. Samples: 459018. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:06:51,958][04426] Avg episode reward: [(0, '12.491')] |
|
[2025-08-17 19:06:51,961][04970] Saving new best policy, reward=12.491! |
|
[2025-08-17 19:06:53,313][04989] Updated weights for policy 0, policy_version 450 (0.0014) |
|
[2025-08-17 19:06:56,956][04426] Fps is (10 sec: 3277.0, 60 sec: 3823.0, 300 sec: 3804.4). Total num frames: 1855488. Throughput: 0: 956.4. Samples: 462190. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:06:56,957][04426] Avg episode reward: [(0, '12.941')] |
|
[2025-08-17 19:06:56,962][04970] Saving new best policy, reward=12.941! |
|
[2025-08-17 19:07:01,960][04426] Fps is (10 sec: 4094.2, 60 sec: 3822.7, 300 sec: 3818.2). Total num frames: 1875968. Throughput: 0: 959.4. Samples: 468668. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:07:01,961][04426] Avg episode reward: [(0, '13.494')] |
|
[2025-08-17 19:07:01,963][04970] Saving new best policy, reward=13.494! |
|
[2025-08-17 19:07:04,088][04989] Updated weights for policy 0, policy_version 460 (0.0016) |
|
[2025-08-17 19:07:06,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 1892352. Throughput: 0: 946.7. Samples: 473380. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:07:06,957][04426] Avg episode reward: [(0, '13.424')] |
|
[2025-08-17 19:07:11,956][04426] Fps is (10 sec: 4097.8, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1916928. Throughput: 0: 962.7. Samples: 476692. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:07:11,959][04426] Avg episode reward: [(0, '13.342')] |
|
[2025-08-17 19:07:13,825][04989] Updated weights for policy 0, policy_version 470 (0.0014) |
|
[2025-08-17 19:07:16,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1933312. Throughput: 0: 963.0. Samples: 483098. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:07:16,959][04426] Avg episode reward: [(0, '13.237')] |
|
[2025-08-17 19:07:21,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3823.1, 300 sec: 3804.4). Total num frames: 1949696. Throughput: 0: 957.6. Samples: 487998. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:07:21,957][04426] Avg episode reward: [(0, '13.277')] |
|
[2025-08-17 19:07:24,867][04989] Updated weights for policy 0, policy_version 480 (0.0013) |
|
[2025-08-17 19:07:26,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1974272. Throughput: 0: 964.2. Samples: 491270. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:07:26,957][04426] Avg episode reward: [(0, '12.644')] |
|
[2025-08-17 19:07:31,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3818.4). Total num frames: 1990656. Throughput: 0: 957.5. Samples: 497388. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:07:31,961][04426] Avg episode reward: [(0, '13.357')] |
|
[2025-08-17 19:07:35,566][04989] Updated weights for policy 0, policy_version 490 (0.0015) |
|
[2025-08-17 19:07:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2011136. Throughput: 0: 974.2. Samples: 502856. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:07:36,957][04426] Avg episode reward: [(0, '14.039')] |
|
[2025-08-17 19:07:36,964][04970] Saving new best policy, reward=14.039! |
|
[2025-08-17 19:07:41,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2031616. Throughput: 0: 972.6. Samples: 505956. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:07:41,957][04426] Avg episode reward: [(0, '15.063')] |
|
[2025-08-17 19:07:41,964][04970] Saving new best policy, reward=15.063! |
|
[2025-08-17 19:07:46,671][04989] Updated weights for policy 0, policy_version 500 (0.0024) |
|
[2025-08-17 19:07:46,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 2048000. Throughput: 0: 951.2. Samples: 511468. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:07:46,957][04426] Avg episode reward: [(0, '15.300')] |
|
[2025-08-17 19:07:46,965][04970] Saving new best policy, reward=15.300! |
|
[2025-08-17 19:07:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2068480. Throughput: 0: 967.2. Samples: 516904. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:07:51,957][04426] Avg episode reward: [(0, '15.151')] |
|
[2025-08-17 19:07:56,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2084864. Throughput: 0: 956.1. Samples: 519718. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:07:56,957][04426] Avg episode reward: [(0, '15.763')] |
|
[2025-08-17 19:07:56,971][04970] Saving new best policy, reward=15.763! |
|
[2025-08-17 19:07:57,161][04989] Updated weights for policy 0, policy_version 510 (0.0019) |
|
[2025-08-17 19:08:01,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3754.9, 300 sec: 3804.4). Total num frames: 2101248. Throughput: 0: 925.1. Samples: 524728. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:08:01,959][04426] Avg episode reward: [(0, '15.756')] |
|
[2025-08-17 19:08:06,956][04426] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2121728. Throughput: 0: 946.8. Samples: 530604. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:08:06,957][04426] Avg episode reward: [(0, '14.440')] |
|
[2025-08-17 19:08:08,306][04989] Updated weights for policy 0, policy_version 520 (0.0017) |
|
[2025-08-17 19:08:11,957][04426] Fps is (10 sec: 4095.6, 60 sec: 3754.6, 300 sec: 3804.4). Total num frames: 2142208. Throughput: 0: 944.1. Samples: 533754. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:08:11,960][04426] Avg episode reward: [(0, '16.605')] |
|
[2025-08-17 19:08:11,964][04970] Saving new best policy, reward=16.605! |
|
[2025-08-17 19:08:16,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 2158592. Throughput: 0: 917.3. Samples: 538668. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:08:16,960][04426] Avg episode reward: [(0, '15.211')] |
|
[2025-08-17 19:08:16,983][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000527_2158592.pth... |
|
[2025-08-17 19:08:16,980][04426] Components not started: RolloutWorker_w1, RolloutWorker_w2, RolloutWorker_w3, wait_time=600.0 seconds |
|
[2025-08-17 19:08:17,076][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000305_1249280.pth |
|
[2025-08-17 19:08:19,550][04989] Updated weights for policy 0, policy_version 530 (0.0021) |
|
[2025-08-17 19:08:21,956][04426] Fps is (10 sec: 3686.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2179072. Throughput: 0: 932.8. Samples: 544832. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:08:21,957][04426] Avg episode reward: [(0, '15.441')] |
|
[2025-08-17 19:08:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3790.6). Total num frames: 2195456. Throughput: 0: 931.7. Samples: 547884. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:08:26,957][04426] Avg episode reward: [(0, '17.169')] |
|
[2025-08-17 19:08:26,983][04970] Saving new best policy, reward=17.169! |
|
[2025-08-17 19:08:30,995][04989] Updated weights for policy 0, policy_version 540 (0.0018) |
|
[2025-08-17 19:08:31,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 2211840. Throughput: 0: 913.0. Samples: 552552. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:08:31,957][04426] Avg episode reward: [(0, '17.460')] |
|
[2025-08-17 19:08:32,011][04970] Saving new best policy, reward=17.460! |
|
[2025-08-17 19:08:36,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2236416. Throughput: 0: 933.3. Samples: 558902. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:08:36,960][04426] Avg episode reward: [(0, '16.387')] |
|
[2025-08-17 19:08:41,558][04989] Updated weights for policy 0, policy_version 550 (0.0031) |
|
[2025-08-17 19:08:41,959][04426] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 2252800. Throughput: 0: 939.6. Samples: 561998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:08:41,960][04426] Avg episode reward: [(0, '17.453')] |
|
[2025-08-17 19:08:46,956][04426] Fps is (10 sec: 3276.7, 60 sec: 3686.4, 300 sec: 3776.6). Total num frames: 2269184. Throughput: 0: 927.7. Samples: 566476. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:08:46,957][04426] Avg episode reward: [(0, '18.478')] |
|
[2025-08-17 19:08:46,963][04970] Saving new best policy, reward=18.478! |
|
[2025-08-17 19:08:51,958][04426] Fps is (10 sec: 3685.5, 60 sec: 3686.3, 300 sec: 3776.6). Total num frames: 2289664. Throughput: 0: 937.7. Samples: 572802. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:08:51,959][04426] Avg episode reward: [(0, '19.041')] |
|
[2025-08-17 19:08:51,961][04970] Saving new best policy, reward=19.041! |
|
[2025-08-17 19:08:52,165][04989] Updated weights for policy 0, policy_version 560 (0.0013) |
|
[2025-08-17 19:08:56,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3776.7). Total num frames: 2306048. Throughput: 0: 933.3. Samples: 575750. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:08:56,957][04426] Avg episode reward: [(0, '19.998')] |
|
[2025-08-17 19:08:56,968][04970] Saving new best policy, reward=19.998! |
|
[2025-08-17 19:09:01,956][04426] Fps is (10 sec: 3687.3, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2326528. Throughput: 0: 935.0. Samples: 580744. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:09:01,957][04426] Avg episode reward: [(0, '21.142')] |
|
[2025-08-17 19:09:01,958][04970] Saving new best policy, reward=21.142! |
|
[2025-08-17 19:09:03,342][04989] Updated weights for policy 0, policy_version 570 (0.0023) |
|
[2025-08-17 19:09:06,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2347008. Throughput: 0: 940.5. Samples: 587156. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:09:06,957][04426] Avg episode reward: [(0, '20.455')] |
|
[2025-08-17 19:09:11,957][04426] Fps is (10 sec: 3685.9, 60 sec: 3686.4, 300 sec: 3776.6). Total num frames: 2363392. Throughput: 0: 933.8. Samples: 589908. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:09:11,958][04426] Avg episode reward: [(0, '20.414')] |
|
[2025-08-17 19:09:14,431][04989] Updated weights for policy 0, policy_version 580 (0.0012) |
|
[2025-08-17 19:09:16,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.6). Total num frames: 2383872. Throughput: 0: 947.4. Samples: 595186. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:09:16,957][04426] Avg episode reward: [(0, '20.148')] |
|
[2025-08-17 19:09:21,956][04426] Fps is (10 sec: 4506.2, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2408448. Throughput: 0: 951.4. Samples: 601716. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:09:21,957][04426] Avg episode reward: [(0, '19.641')] |
|
[2025-08-17 19:09:24,846][04989] Updated weights for policy 0, policy_version 590 (0.0020) |
|
[2025-08-17 19:09:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2420736. Throughput: 0: 934.7. Samples: 604058. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:09:26,957][04426] Avg episode reward: [(0, '21.291')] |
|
[2025-08-17 19:09:26,962][04970] Saving new best policy, reward=21.291! |
|
[2025-08-17 19:09:31,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 2441216. Throughput: 0: 959.6. Samples: 609656. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:09:31,957][04426] Avg episode reward: [(0, '22.229')] |
|
[2025-08-17 19:09:31,958][04970] Saving new best policy, reward=22.229! |
|
[2025-08-17 19:09:34,858][04989] Updated weights for policy 0, policy_version 600 (0.0013) |
|
[2025-08-17 19:09:36,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2461696. Throughput: 0: 962.7. Samples: 616122. Policy #0 lag: (min: 0.0, avg: 0.2, max: 2.0) |
|
[2025-08-17 19:09:36,959][04426] Avg episode reward: [(0, '22.452')] |
|
[2025-08-17 19:09:36,968][04970] Saving new best policy, reward=22.452! |
|
[2025-08-17 19:09:41,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2478080. Throughput: 0: 940.1. Samples: 618056. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:09:41,960][04426] Avg episode reward: [(0, '23.023')] |
|
[2025-08-17 19:09:41,964][04970] Saving new best policy, reward=23.023! |
|
[2025-08-17 19:09:46,170][04989] Updated weights for policy 0, policy_version 610 (0.0017) |
|
[2025-08-17 19:09:46,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3776.7). Total num frames: 2498560. Throughput: 0: 962.2. Samples: 624044. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:09:46,957][04426] Avg episode reward: [(0, '21.621')] |
|
[2025-08-17 19:09:51,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3823.1, 300 sec: 3790.5). Total num frames: 2519040. Throughput: 0: 954.7. Samples: 630118. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-17 19:09:51,960][04426] Avg episode reward: [(0, '21.665')] |
|
[2025-08-17 19:09:56,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 2535424. Throughput: 0: 938.1. Samples: 632122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:09:56,960][04426] Avg episode reward: [(0, '21.481')] |
|
[2025-08-17 19:09:57,037][04989] Updated weights for policy 0, policy_version 620 (0.0020) |
|
[2025-08-17 19:10:01,958][04426] Fps is (10 sec: 3685.6, 60 sec: 3822.8, 300 sec: 3776.6). Total num frames: 2555904. Throughput: 0: 964.4. Samples: 638586. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:10:01,961][04426] Avg episode reward: [(0, '21.393')] |
|
[2025-08-17 19:10:06,960][04426] Fps is (10 sec: 4094.4, 60 sec: 3822.7, 300 sec: 3804.4). Total num frames: 2576384. Throughput: 0: 948.1. Samples: 644384. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:10:06,961][04426] Avg episode reward: [(0, '20.544')] |
|
[2025-08-17 19:10:07,780][04989] Updated weights for policy 0, policy_version 630 (0.0022) |
|
[2025-08-17 19:10:11,956][04426] Fps is (10 sec: 4096.8, 60 sec: 3891.3, 300 sec: 3804.4). Total num frames: 2596864. Throughput: 0: 949.9. Samples: 646802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:10:11,957][04426] Avg episode reward: [(0, '21.401')] |
|
[2025-08-17 19:10:16,956][04426] Fps is (10 sec: 4097.6, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2617344. Throughput: 0: 969.4. Samples: 653278. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:10:16,958][04426] Avg episode reward: [(0, '20.125')] |
|
[2025-08-17 19:10:16,965][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000639_2617344.pth... |
|
[2025-08-17 19:10:17,066][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000417_1708032.pth |
|
[2025-08-17 19:10:17,637][04989] Updated weights for policy 0, policy_version 640 (0.0015) |
|
[2025-08-17 19:10:21,956][04426] Fps is (10 sec: 3686.3, 60 sec: 3754.6, 300 sec: 3804.4). Total num frames: 2633728. Throughput: 0: 939.4. Samples: 658396. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:10:21,957][04426] Avg episode reward: [(0, '20.686')] |
|
[2025-08-17 19:10:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2654208. Throughput: 0: 959.3. Samples: 661226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-17 19:10:26,959][04426] Avg episode reward: [(0, '19.885')] |
|
[2025-08-17 19:10:28,805][04989] Updated weights for policy 0, policy_version 650 (0.0015) |
|
[2025-08-17 19:10:31,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2674688. Throughput: 0: 969.2. Samples: 667658. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:10:31,960][04426] Avg episode reward: [(0, '20.269')] |
|
[2025-08-17 19:10:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2691072. Throughput: 0: 943.2. Samples: 672560. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:10:36,957][04426] Avg episode reward: [(0, '22.093')] |
|
[2025-08-17 19:10:39,911][04989] Updated weights for policy 0, policy_version 660 (0.0018) |
|
[2025-08-17 19:10:41,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3804.5). Total num frames: 2711552. Throughput: 0: 966.9. Samples: 675632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-08-17 19:10:41,957][04426] Avg episode reward: [(0, '22.746')] |
|
[2025-08-17 19:10:46,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3804.5). Total num frames: 2732032. Throughput: 0: 965.8. Samples: 682046. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:10:46,957][04426] Avg episode reward: [(0, '25.043')] |
|
[2025-08-17 19:10:46,966][04970] Saving new best policy, reward=25.043! |
|
[2025-08-17 19:10:50,906][04989] Updated weights for policy 0, policy_version 670 (0.0012) |
|
[2025-08-17 19:10:51,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.6). Total num frames: 2744320. Throughput: 0: 940.2. Samples: 686690. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:10:51,959][04426] Avg episode reward: [(0, '24.226')] |
|
[2025-08-17 19:10:56,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2768896. Throughput: 0: 956.8. Samples: 689860. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:10:56,957][04426] Avg episode reward: [(0, '24.573')] |
|
[2025-08-17 19:11:00,577][04989] Updated weights for policy 0, policy_version 680 (0.0024) |
|
[2025-08-17 19:11:01,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3823.1, 300 sec: 3790.5). Total num frames: 2785280. Throughput: 0: 956.4. Samples: 696314. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:11:01,960][04426] Avg episode reward: [(0, '24.557')] |
|
[2025-08-17 19:11:06,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3804.4). Total num frames: 2805760. Throughput: 0: 950.5. Samples: 701170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:11:06,962][04426] Avg episode reward: [(0, '23.461')] |
|
[2025-08-17 19:11:11,656][04989] Updated weights for policy 0, policy_version 690 (0.0022) |
|
[2025-08-17 19:11:11,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2826240. Throughput: 0: 958.8. Samples: 704372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:11:11,957][04426] Avg episode reward: [(0, '23.020')] |
|
[2025-08-17 19:11:16,959][04426] Fps is (10 sec: 3685.3, 60 sec: 3754.5, 300 sec: 3804.4). Total num frames: 2842624. Throughput: 0: 950.7. Samples: 710442. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:11:16,960][04426] Avg episode reward: [(0, '22.791')] |
|
[2025-08-17 19:11:21,956][04426] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2863104. Throughput: 0: 955.5. Samples: 715558. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:11:21,957][04426] Avg episode reward: [(0, '22.659')] |
|
[2025-08-17 19:11:22,649][04989] Updated weights for policy 0, policy_version 700 (0.0013) |
|
[2025-08-17 19:11:26,956][04426] Fps is (10 sec: 4097.2, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2883584. Throughput: 0: 956.9. Samples: 718694. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:11:26,957][04426] Avg episode reward: [(0, '22.527')] |
|
[2025-08-17 19:11:31,957][04426] Fps is (10 sec: 3686.0, 60 sec: 3754.6, 300 sec: 3804.4). Total num frames: 2899968. Throughput: 0: 939.3. Samples: 724314. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:11:31,962][04426] Avg episode reward: [(0, '22.524')] |
|
[2025-08-17 19:11:34,004][04989] Updated weights for policy 0, policy_version 710 (0.0014) |
|
[2025-08-17 19:11:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2920448. Throughput: 0: 961.3. Samples: 729950. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:11:36,963][04426] Avg episode reward: [(0, '21.615')] |
|
[2025-08-17 19:11:41,956][04426] Fps is (10 sec: 4096.6, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2940928. Throughput: 0: 964.0. Samples: 733238. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:11:41,963][04426] Avg episode reward: [(0, '21.469')] |
|
[2025-08-17 19:11:44,005][04989] Updated weights for policy 0, policy_version 720 (0.0014) |
|
[2025-08-17 19:11:46,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 2957312. Throughput: 0: 937.1. Samples: 738484. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:11:46,957][04426] Avg episode reward: [(0, '21.517')] |
|
[2025-08-17 19:11:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2977792. Throughput: 0: 962.4. Samples: 744476. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:11:51,957][04426] Avg episode reward: [(0, '22.891')] |
|
[2025-08-17 19:11:54,391][04989] Updated weights for policy 0, policy_version 730 (0.0019) |
|
[2025-08-17 19:11:56,958][04426] Fps is (10 sec: 4094.9, 60 sec: 3822.8, 300 sec: 3804.4). Total num frames: 2998272. Throughput: 0: 964.0. Samples: 747756. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:11:56,960][04426] Avg episode reward: [(0, '22.643')] |
|
[2025-08-17 19:12:01,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3014656. Throughput: 0: 935.1. Samples: 752518. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:12:01,960][04426] Avg episode reward: [(0, '23.675')] |
|
[2025-08-17 19:12:05,552][04989] Updated weights for policy 0, policy_version 740 (0.0013) |
|
[2025-08-17 19:12:06,956][04426] Fps is (10 sec: 3687.2, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 3035136. Throughput: 0: 964.9. Samples: 758980. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:12:06,957][04426] Avg episode reward: [(0, '23.854')] |
|
[2025-08-17 19:12:11,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3055616. Throughput: 0: 966.9. Samples: 762206. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:12:11,957][04426] Avg episode reward: [(0, '23.895')] |
|
[2025-08-17 19:12:16,427][04989] Updated weights for policy 0, policy_version 750 (0.0013) |
|
[2025-08-17 19:12:16,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3823.1, 300 sec: 3804.4). Total num frames: 3072000. Throughput: 0: 951.5. Samples: 767130. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:12:16,957][04426] Avg episode reward: [(0, '23.773')] |
|
[2025-08-17 19:12:16,962][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000750_3072000.pth... |
|
[2025-08-17 19:12:17,048][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000527_2158592.pth |
|
[2025-08-17 19:12:21,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3790.5). Total num frames: 3092480. Throughput: 0: 968.2. Samples: 773520. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:12:21,957][04426] Avg episode reward: [(0, '23.225')] |
|
[2025-08-17 19:12:26,181][04989] Updated weights for policy 0, policy_version 760 (0.0017) |
|
[2025-08-17 19:12:26,958][04426] Fps is (10 sec: 4095.0, 60 sec: 3822.8, 300 sec: 3804.4). Total num frames: 3112960. Throughput: 0: 968.2. Samples: 776810. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:12:26,959][04426] Avg episode reward: [(0, '22.179')] |
|
[2025-08-17 19:12:31,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3790.5). Total num frames: 3129344. Throughput: 0: 959.7. Samples: 781672. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:12:31,961][04426] Avg episode reward: [(0, '22.164')] |
|
[2025-08-17 19:12:36,883][04989] Updated weights for policy 0, policy_version 770 (0.0021) |
|
[2025-08-17 19:12:36,956][04426] Fps is (10 sec: 4097.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 3153920. Throughput: 0: 972.9. Samples: 788258. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:12:36,957][04426] Avg episode reward: [(0, '22.799')] |
|
[2025-08-17 19:12:41,957][04426] Fps is (10 sec: 4095.4, 60 sec: 3822.8, 300 sec: 3804.4). Total num frames: 3170304. Throughput: 0: 968.7. Samples: 791348. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:12:41,961][04426] Avg episode reward: [(0, '23.137')] |
|
[2025-08-17 19:12:46,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 3190784. Throughput: 0: 973.1. Samples: 796306. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:12:46,957][04426] Avg episode reward: [(0, '23.902')] |
|
[2025-08-17 19:12:47,866][04989] Updated weights for policy 0, policy_version 780 (0.0016) |
|
[2025-08-17 19:12:51,956][04426] Fps is (10 sec: 4096.6, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3211264. Throughput: 0: 974.9. Samples: 802848. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:12:51,957][04426] Avg episode reward: [(0, '25.188')] |
|
[2025-08-17 19:12:51,962][04970] Saving new best policy, reward=25.188! |
|
[2025-08-17 19:12:56,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3818.3). Total num frames: 3227648. Throughput: 0: 961.4. Samples: 805470. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:12:56,958][04426] Avg episode reward: [(0, '25.878')] |
|
[2025-08-17 19:12:56,969][04970] Saving new best policy, reward=25.878! |
|
[2025-08-17 19:12:59,056][04989] Updated weights for policy 0, policy_version 790 (0.0016) |
|
[2025-08-17 19:13:01,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3248128. Throughput: 0: 966.3. Samples: 810614. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:13:01,960][04426] Avg episode reward: [(0, '25.722')] |
|
[2025-08-17 19:13:06,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3268608. Throughput: 0: 967.7. Samples: 817068. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:13:06,960][04426] Avg episode reward: [(0, '25.640')] |
|
[2025-08-17 19:13:09,137][04989] Updated weights for policy 0, policy_version 800 (0.0016) |
|
[2025-08-17 19:13:11,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3280896. Throughput: 0: 949.1. Samples: 819516. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:13:11,962][04426] Avg episode reward: [(0, '23.743')] |
|
[2025-08-17 19:13:16,956][04426] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3305472. Throughput: 0: 964.7. Samples: 825084. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:13:16,960][04426] Avg episode reward: [(0, '24.960')] |
|
[2025-08-17 19:13:19,644][04989] Updated weights for policy 0, policy_version 810 (0.0024) |
|
[2025-08-17 19:13:21,960][04426] Fps is (10 sec: 4503.6, 60 sec: 3890.9, 300 sec: 3832.1). Total num frames: 3325952. Throughput: 0: 964.8. Samples: 831678. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:13:21,962][04426] Avg episode reward: [(0, '24.853')] |
|
[2025-08-17 19:13:26,956][04426] Fps is (10 sec: 3686.3, 60 sec: 3823.1, 300 sec: 3832.2). Total num frames: 3342336. Throughput: 0: 942.2. Samples: 833744. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:13:26,958][04426] Avg episode reward: [(0, '25.092')] |
|
[2025-08-17 19:13:30,540][04989] Updated weights for policy 0, policy_version 820 (0.0018) |
|
[2025-08-17 19:13:31,956][04426] Fps is (10 sec: 3688.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3362816. Throughput: 0: 964.0. Samples: 839686. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:13:31,960][04426] Avg episode reward: [(0, '25.854')] |
|
[2025-08-17 19:13:36,956][04426] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3383296. Throughput: 0: 951.3. Samples: 845656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:13:36,957][04426] Avg episode reward: [(0, '26.176')] |
|
[2025-08-17 19:13:36,965][04970] Saving new best policy, reward=26.176! |
|
[2025-08-17 19:13:41,805][04989] Updated weights for policy 0, policy_version 830 (0.0016) |
|
[2025-08-17 19:13:41,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3832.2). Total num frames: 3399680. Throughput: 0: 936.0. Samples: 847590. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:13:41,963][04426] Avg episode reward: [(0, '26.279')] |
|
[2025-08-17 19:13:41,964][04970] Saving new best policy, reward=26.279! |
|
[2025-08-17 19:13:46,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3420160. Throughput: 0: 963.4. Samples: 853968. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:13:46,961][04426] Avg episode reward: [(0, '24.779')] |
|
[2025-08-17 19:13:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 3436544. Throughput: 0: 947.1. Samples: 859686. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:13:51,957][04426] Avg episode reward: [(0, '22.954')] |
|
[2025-08-17 19:13:52,333][04989] Updated weights for policy 0, policy_version 840 (0.0019) |
|
[2025-08-17 19:13:56,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3457024. Throughput: 0: 945.6. Samples: 862070. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:13:56,961][04426] Avg episode reward: [(0, '22.721')] |
|
[2025-08-17 19:14:01,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3477504. Throughput: 0: 964.8. Samples: 868500. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:14:01,961][04426] Avg episode reward: [(0, '21.547')] |
|
[2025-08-17 19:14:02,318][04989] Updated weights for policy 0, policy_version 850 (0.0017) |
|
[2025-08-17 19:14:06,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 3493888. Throughput: 0: 938.6. Samples: 873912. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:14:06,957][04426] Avg episode reward: [(0, '21.938')] |
|
[2025-08-17 19:14:11,956][04426] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3514368. Throughput: 0: 952.9. Samples: 876624. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:14:11,961][04426] Avg episode reward: [(0, '22.492')] |
|
[2025-08-17 19:14:13,382][04989] Updated weights for policy 0, policy_version 860 (0.0013) |
|
[2025-08-17 19:14:16,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3534848. Throughput: 0: 963.0. Samples: 883022. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:14:16,960][04426] Avg episode reward: [(0, '22.948')] |
|
[2025-08-17 19:14:16,967][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000863_3534848.pth... |
|
[2025-08-17 19:14:17,070][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000639_2617344.pth |
|
[2025-08-17 19:14:21,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3754.9, 300 sec: 3832.2). Total num frames: 3551232. Throughput: 0: 939.8. Samples: 887948. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:14:21,957][04426] Avg episode reward: [(0, '23.114')] |
|
[2025-08-17 19:14:24,403][04989] Updated weights for policy 0, policy_version 870 (0.0021) |
|
[2025-08-17 19:14:26,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3832.2). Total num frames: 3571712. Throughput: 0: 965.7. Samples: 891048. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:14:26,960][04426] Avg episode reward: [(0, '22.833')] |
|
[2025-08-17 19:14:31,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3592192. Throughput: 0: 966.8. Samples: 897472. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:14:31,960][04426] Avg episode reward: [(0, '22.360')] |
|
[2025-08-17 19:14:35,165][04989] Updated weights for policy 0, policy_version 880 (0.0014) |
|
[2025-08-17 19:14:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 3608576. Throughput: 0: 949.7. Samples: 902424. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:14:36,963][04426] Avg episode reward: [(0, '23.453')] |
|
[2025-08-17 19:14:41,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3629056. Throughput: 0: 968.3. Samples: 905642. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:14:41,962][04426] Avg episode reward: [(0, '24.920')] |
|
[2025-08-17 19:14:44,786][04989] Updated weights for policy 0, policy_version 890 (0.0014) |
|
[2025-08-17 19:14:46,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3649536. Throughput: 0: 968.1. Samples: 912066. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:14:46,957][04426] Avg episode reward: [(0, '24.140')] |
|
[2025-08-17 19:14:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3665920. Throughput: 0: 956.7. Samples: 916964. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:14:51,961][04426] Avg episode reward: [(0, '23.605')] |
|
[2025-08-17 19:14:55,850][04989] Updated weights for policy 0, policy_version 900 (0.0012) |
|
[2025-08-17 19:14:56,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3690496. Throughput: 0: 969.9. Samples: 920270. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:14:56,961][04426] Avg episode reward: [(0, '24.786')] |
|
[2025-08-17 19:15:01,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3706880. Throughput: 0: 969.2. Samples: 926636. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:15:01,962][04426] Avg episode reward: [(0, '23.883')] |
|
[2025-08-17 19:15:06,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3723264. Throughput: 0: 968.0. Samples: 931508. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:15:06,960][04426] Avg episode reward: [(0, '21.882')] |
|
[2025-08-17 19:15:06,970][04989] Updated weights for policy 0, policy_version 910 (0.0013) |
|
[2025-08-17 19:15:11,956][04426] Fps is (10 sec: 4095.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3747840. Throughput: 0: 967.9. Samples: 934604. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:15:11,958][04426] Avg episode reward: [(0, '22.688')] |
|
[2025-08-17 19:15:16,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3764224. Throughput: 0: 955.6. Samples: 940474. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:15:16,963][04426] Avg episode reward: [(0, '23.141')] |
|
[2025-08-17 19:15:18,023][04989] Updated weights for policy 0, policy_version 920 (0.0016) |
|
[2025-08-17 19:15:21,956][04426] Fps is (10 sec: 3277.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3780608. Throughput: 0: 961.3. Samples: 945684. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:15:21,961][04426] Avg episode reward: [(0, '22.749')] |
|
[2025-08-17 19:15:26,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3805184. Throughput: 0: 963.6. Samples: 949004. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:15:26,957][04426] Avg episode reward: [(0, '23.841')] |
|
[2025-08-17 19:15:27,695][04989] Updated weights for policy 0, policy_version 930 (0.0016) |
|
[2025-08-17 19:15:31,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3821568. Throughput: 0: 944.6. Samples: 954574. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:15:31,957][04426] Avg episode reward: [(0, '25.452')] |
|
[2025-08-17 19:15:36,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3842048. Throughput: 0: 963.2. Samples: 960308. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-17 19:15:36,957][04426] Avg episode reward: [(0, '25.596')] |
|
[2025-08-17 19:15:38,621][04989] Updated weights for policy 0, policy_version 940 (0.0014) |
|
[2025-08-17 19:15:41,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3862528. Throughput: 0: 963.3. Samples: 963618. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:15:41,957][04426] Avg episode reward: [(0, '25.902')] |
|
[2025-08-17 19:15:46,956][04426] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3878912. Throughput: 0: 937.1. Samples: 968806. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:15:46,957][04426] Avg episode reward: [(0, '26.414')] |
|
[2025-08-17 19:15:46,969][04970] Saving new best policy, reward=26.414! |
|
[2025-08-17 19:15:49,675][04989] Updated weights for policy 0, policy_version 950 (0.0018) |
|
[2025-08-17 19:15:51,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3899392. Throughput: 0: 962.5. Samples: 974822. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-08-17 19:15:51,957][04426] Avg episode reward: [(0, '24.547')] |
|
[2025-08-17 19:15:56,956][04426] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3919872. Throughput: 0: 964.3. Samples: 977998. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2025-08-17 19:15:56,957][04426] Avg episode reward: [(0, '25.758')] |
|
[2025-08-17 19:16:01,166][04989] Updated weights for policy 0, policy_version 960 (0.0014) |
|
[2025-08-17 19:16:01,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3936256. Throughput: 0: 937.8. Samples: 982674. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:16:01,957][04426] Avg episode reward: [(0, '25.699')] |
|
[2025-08-17 19:16:06,956][04426] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3956736. Throughput: 0: 961.5. Samples: 988950. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:16:06,957][04426] Avg episode reward: [(0, '25.562')] |
|
[2025-08-17 19:16:10,936][04989] Updated weights for policy 0, policy_version 970 (0.0012) |
|
[2025-08-17 19:16:11,960][04426] Fps is (10 sec: 3684.7, 60 sec: 3754.4, 300 sec: 3832.2). Total num frames: 3973120. Throughput: 0: 958.7. Samples: 992150. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-08-17 19:16:11,963][04426] Avg episode reward: [(0, '26.125')] |
|
[2025-08-17 19:16:16,956][04426] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3989504. Throughput: 0: 940.1. Samples: 996878. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-08-17 19:16:16,957][04426] Avg episode reward: [(0, '27.893')] |
|
[2025-08-17 19:16:16,985][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000975_3993600.pth... |
|
[2025-08-17 19:16:17,081][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000750_3072000.pth |
|
[2025-08-17 19:16:17,094][04970] Saving new best policy, reward=27.893! |
|
[2025-08-17 19:16:19,884][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:16:19,892][04970] Stopping Batcher_0... |
|
[2025-08-17 19:16:19,893][04970] Loop batcher_evt_loop terminating... |
|
[2025-08-17 19:16:19,892][04426] Component Batcher_0 stopped! |
|
[2025-08-17 19:16:19,895][04426] Component RolloutWorker_w1 process died already! Don't wait for it. |
|
[2025-08-17 19:16:19,897][04426] Component RolloutWorker_w2 process died already! Don't wait for it. |
|
[2025-08-17 19:16:19,898][04426] Component RolloutWorker_w3 process died already! Don't wait for it. |
|
[2025-08-17 19:16:19,959][04989] Weights refcount: 2 0 |
|
[2025-08-17 19:16:19,964][04426] Component InferenceWorker_p0-w0 stopped! |
|
[2025-08-17 19:16:19,965][04989] Stopping InferenceWorker_p0-w0... |
|
[2025-08-17 19:16:19,966][04989] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-08-17 19:16:19,992][04970] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000863_3534848.pth |
|
[2025-08-17 19:16:20,003][04970] Saving new best policy, reward=28.767! |
|
[2025-08-17 19:16:20,092][04970] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:16:20,226][04970] Stopping LearnerWorker_p0... |
|
[2025-08-17 19:16:20,226][04426] Component LearnerWorker_p0 stopped! |
|
[2025-08-17 19:16:20,227][04970] Loop learner_proc0_evt_loop terminating... |
|
[2025-08-17 19:16:20,256][04426] Component RolloutWorker_w4 stopped! |
|
[2025-08-17 19:16:20,257][04990] Stopping RolloutWorker_w4... |
|
[2025-08-17 19:16:20,260][04990] Loop rollout_proc4_evt_loop terminating... |
|
[2025-08-17 19:16:20,273][04426] Component RolloutWorker_w6 stopped! |
|
[2025-08-17 19:16:20,276][04994] Stopping RolloutWorker_w6... |
|
[2025-08-17 19:16:20,279][04426] Component RolloutWorker_w0 stopped! |
|
[2025-08-17 19:16:20,278][04994] Loop rollout_proc6_evt_loop terminating... |
|
[2025-08-17 19:16:20,282][04987] Stopping RolloutWorker_w0... |
|
[2025-08-17 19:16:20,284][04987] Loop rollout_proc0_evt_loop terminating... |
|
[2025-08-17 19:16:20,368][04992] Stopping RolloutWorker_w5... |
|
[2025-08-17 19:16:20,368][04426] Component RolloutWorker_w5 stopped! |
|
[2025-08-17 19:16:20,369][04992] Loop rollout_proc5_evt_loop terminating... |
|
[2025-08-17 19:16:20,398][04995] Stopping RolloutWorker_w7... |
|
[2025-08-17 19:16:20,397][04426] Component RolloutWorker_w7 stopped! |
|
[2025-08-17 19:16:20,399][04426] Waiting for process learner_proc0 to stop... |
|
[2025-08-17 19:16:20,406][04995] Loop rollout_proc7_evt_loop terminating... |
|
[2025-08-17 19:16:21,707][04426] Waiting for process inference_proc0-0 to join... |
|
[2025-08-17 19:16:21,709][04426] Waiting for process rollout_proc0 to join... |
|
[2025-08-17 19:16:22,969][04426] Waiting for process rollout_proc1 to join... |
|
[2025-08-17 19:16:22,970][04426] Waiting for process rollout_proc2 to join... |
|
[2025-08-17 19:16:22,972][04426] Waiting for process rollout_proc3 to join... |
|
[2025-08-17 19:16:22,973][04426] Waiting for process rollout_proc4 to join... |
|
[2025-08-17 19:16:22,974][04426] Waiting for process rollout_proc5 to join... |
|
[2025-08-17 19:16:22,975][04426] Waiting for process rollout_proc6 to join... |
|
[2025-08-17 19:16:22,976][04426] Waiting for process rollout_proc7 to join... |
|
[2025-08-17 19:16:22,977][04426] Batcher 0 profile tree view: |
|
batching: 22.6165, releasing_batches: 0.0273 |
|
[2025-08-17 19:16:22,978][04426] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
wait_policy_total: 416.8125 |
|
update_model: 9.1201 |
|
weight_update: 0.0013 |
|
one_step: 0.0082 |
|
handle_policy_step: 597.7493 |
|
deserialize: 14.3963, stack: 3.6418, obs_to_device_normalize: 133.7622, forward: 315.3305, send_messages: 22.5603 |
|
prepare_outputs: 82.6509 |
|
to_cpu: 51.6987 |
|
[2025-08-17 19:16:22,982][04426] Learner 0 profile tree view: |
|
misc: 0.0037, prepare_batch: 12.1209 |
|
train: 67.7893 |
|
epoch_init: 0.0045, minibatch_init: 0.0065, losses_postprocess: 0.6287, kl_divergence: 0.5606, after_optimizer: 31.8801 |
|
calculate_losses: 23.2232 |
|
losses_init: 0.0036, forward_head: 1.3065, bptt_initial: 15.7988, tail: 0.8777, advantages_returns: 0.2356, losses: 3.0755 |
|
bptt: 1.7118 |
|
bptt_forward_core: 1.6235 |
|
update: 10.9804 |
|
clip: 0.9500 |
|
[2025-08-17 19:16:22,983][04426] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3626, enqueue_policy_requests: 95.5663, env_step: 835.7856, overhead: 15.5851, complete_rollouts: 8.4802 |
|
save_policy_outputs: 23.2627 |
|
split_output_tensors: 8.8770 |
|
[2025-08-17 19:16:22,983][04426] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.3259, enqueue_policy_requests: 197.1562, env_step: 753.9508, overhead: 14.9198, complete_rollouts: 5.6116 |
|
save_policy_outputs: 20.7346 |
|
split_output_tensors: 7.8943 |
|
[2025-08-17 19:16:22,984][04426] Loop Runner_EvtLoop terminating... |
|
[2025-08-17 19:16:22,985][04426] Runner profile tree view: |
|
main_loop: 1082.8242 |
|
[2025-08-17 19:16:22,986][04426] Collected {0: 4005888}, FPS: 3699.5 |
|
[2025-08-17 19:16:42,730][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:16:42,731][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:16:42,732][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:16:42,733][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:16:42,733][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:16:42,734][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:16:42,735][04426] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:16:42,736][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:16:42,737][04426] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-17 19:16:42,738][04426] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-17 19:16:42,740][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:16:42,741][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:16:42,742][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:16:42,743][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:16:42,744][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:16:42,773][04426] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-17 19:16:42,776][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:16:42,778][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:16:42,793][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:16:42,891][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:16:42,892][04426] Policy head output size: 512 |
|
[2025-08-17 19:16:43,060][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:16:43,063][04426] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:16:43,066][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:16:43,067][04426] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:16:43,069][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:16:43,070][04426] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
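All three attempts above fail for the same reason: PyTorch 2.6 changed the torch.load default to weights_only=True, and this Sample Factory checkpoint pickles a numpy.core.multiarray.scalar object that the weights-only unpickler rejects. Below is a minimal sketch of the two remedies the error message itself suggests; it is not part of the original run, the checkpoint path is copied from the log, and it assumes you trust the file (it was produced by this training session) and that the installed numpy still exposes numpy.core.multiarray.scalar.

# Sketch only: make the checkpoint loadable under PyTorch >= 2.6.
import numpy as np
import torch
import torch.serialization

ckpt_path = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth"

# Option 1: keep the safe weights-only loader but allowlist the blocked numpy global.
# (Other non-tensor objects in the checkpoint could still be rejected after this one.)
torch.serialization.add_safe_globals([np.core.multiarray.scalar])
checkpoint = torch.load(ckpt_path, map_location="cpu")

# Option 2: opt out of the safe loader entirely; this executes arbitrary pickle code,
# so use it only for files you trust.
checkpoint = torch.load(ckpt_path, map_location="cpu", weights_only=False)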
|
[2025-08-17 19:16:48,453][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:16:48,456][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:16:48,458][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:16:48,459][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:16:48,461][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:16:48,462][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:16:48,465][04426] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:16:48,466][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:16:48,467][04426] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-17 19:16:48,468][04426] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-17 19:16:48,469][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:16:48,470][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:16:48,472][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:16:48,473][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:16:48,475][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:16:48,521][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:16:48,522][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:16:48,533][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:16:48,567][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:16:48,568][04426] Policy head output size: 512 |
|
[2025-08-17 19:16:48,586][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:16:48,587][04426] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:16:48,589][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:16:48,591][04426] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:16:48,592][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:16:48,593][04426] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:17:43,733][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:17:43,734][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:17:43,734][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:17:43,735][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:17:43,736][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:17:43,737][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:17:43,738][04426] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-08-17 19:17:43,739][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:17:43,740][04426] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-08-17 19:17:43,741][04426] Adding new argument 'hf_repository'='LizardAPN/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-08-17 19:17:43,741][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:17:43,742][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:17:43,743][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:17:43,744][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:17:43,745][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:17:43,769][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:17:43,771][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:17:43,781][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:17:43,815][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:17:43,816][04426] Policy head output size: 512 |
|
[2025-08-17 19:17:43,837][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:17:43,838][04426] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:17:43,840][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:17:43,842][04426] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:17:43,843][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:17:43,845][04426] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:18:01,192][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:18:01,193][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:18:01,194][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:18:01,195][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:18:01,196][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:18:01,197][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:18:01,198][04426] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:18:01,199][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:18:01,199][04426] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-17 19:18:01,200][04426] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-17 19:18:01,201][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:18:01,202][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:18:01,203][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:18:01,204][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:18:01,204][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:18:01,231][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:18:01,232][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:18:01,242][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:18:01,278][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:18:01,279][04426] Policy head output size: 512 |
|
[2025-08-17 19:18:01,298][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:18:01,300][04426] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:18:01,301][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:18:01,303][04426] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:18:01,304][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:18:01,306][04426] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:19:04,270][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:19:04,271][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:19:04,272][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:19:04,273][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:19:04,274][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:19:04,276][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:19:04,277][04426] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:19:04,278][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:19:04,279][04426] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-17 19:19:04,279][04426] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-17 19:19:04,280][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:19:04,281][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:19:04,282][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:19:04,284][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:19:04,285][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:19:04,327][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:19:04,329][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:19:04,346][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:19:04,399][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:19:04,400][04426] Policy head output size: 512 |
|
[2025-08-17 19:19:04,429][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:19:04,430][04426] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
|
[2025-08-17 19:19:04,432][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:19:04,433][04426] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
|
[2025-08-17 19:19:04,434][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:19:04,435][04426] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
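The TypeError above is caused by the notebook patch itself: the lambda rebinds torch.load and then calls torch.load inside its own body, so it invokes itself, and on the second call weights_only arrives both via kwargs and via the explicit weights_only=False. A non-recursive version of that patch is sketched below; it is an assumption about what the notebook cell intended, not code from the log, and the same trust caveat applies because it forces weights_only=False.

# Sketch only: capture the original torch.load before rebinding it.
import torch

_original_torch_load = torch.load  # must be captured before any patching

def _patched_torch_load(*args, **kwargs):
    # Fall back to the pre-2.6 behaviour only when the caller did not pass weights_only.
    kwargs.setdefault("weights_only", False)
    return _original_torch_load(*args, **kwargs)

torch.load = _patched_torch_load

The later run at 19:21:01 shows a second patch whose original_load was captured after the first lambda was already installed, so it still delegates into the recursive version; restoring the real torch.load (or restarting the runtime) before re-patching avoids that compounding failure.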
|
[2025-08-17 19:20:24,340][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:20:24,341][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:20:24,342][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:20:24,343][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:20:24,344][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:20:24,345][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:20:24,346][04426] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:20:24,347][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:20:24,348][04426] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-17 19:20:24,349][04426] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-17 19:20:24,350][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:20:24,352][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:20:24,354][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:20:24,355][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:20:24,356][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:20:24,386][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:20:24,387][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:20:24,399][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:20:24,459][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:20:24,462][04426] Policy head output size: 512 |
|
[2025-08-17 19:20:24,493][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:20:24,494][04426] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
|
[2025-08-17 19:20:24,496][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:20:24,497][04426] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
|
[2025-08-17 19:20:24,498][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:20:24,499][04426] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
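Note: the repeated TypeError above is self-inflicted. An earlier notebook cell rebound torch.load to a lambda that calls torch.load again, so by the time the lambda runs it is calling itself, and the explicit weights_only=False collides with the keyword the recursive call already carries. A minimal sketch of a patch that avoids both problems, assuming it runs in a fresh session before any checkpoint is touched (the wrapper name is illustrative):

    import torch

    # Capture the genuine function once so the wrapper never calls itself.
    _original_torch_load = torch.load

    def _torch_load_full(*args, **kwargs):
        # Default weights_only instead of forcing it, so a caller that already
        # passes the keyword does not hit "multiple values for keyword argument".
        kwargs.setdefault("weights_only", False)
        return _original_torch_load(*args, **kwargs)

    torch.load = _torch_load_full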
|
[2025-08-17 19:21:01,185][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:21:01,186][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:21:01,187][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:21:01,189][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:21:01,190][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:21:01,192][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:21:01,193][04426] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:21:01,194][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:21:01,195][04426] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-17 19:21:01,198][04426] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-17 19:21:01,199][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:21:01,200][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:21:01,201][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:21:01,202][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:21:01,203][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:21:01,246][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:21:01,248][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:21:01,259][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:21:01,294][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:21:01,295][04426] Policy head output size: 512 |
|
[2025-08-17 19:21:01,315][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:21:01,316][04426] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-4068497191.py", line 7, in <lambda> |
|
torch.load = lambda f, *args, **kwargs: original_load(f, *args, **{**kwargs, 'weights_only': False}) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
|
[2025-08-17 19:21:01,318][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:21:01,319][04426] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-4068497191.py", line 7, in <lambda> |
|
torch.load = lambda f, *args, **kwargs: original_load(f, *args, **{**kwargs, 'weights_only': False}) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
|
[2025-08-17 19:21:01,321][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:21:01,321][04426] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-4068497191.py", line 7, in <lambda> |
|
torch.load = lambda f, *args, **kwargs: original_load(f, *args, **{**kwargs, 'weights_only': False}) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/tmp/ipython-input-755043023.py", line 2, in <lambda> |
|
torch.load = lambda *args, **kwargs: torch.load(*args, **kwargs, weights_only=False) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
TypeError: __main__.<lambda>() got multiple values for keyword argument 'weights_only' |
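Note: the second patch (the ipython-input-4068497191.py frame) fails the same way because original_load was captured after the first cell had already replaced torch.load, so it still points at the self-referential lambda visible one frame below it. A sketch of recovering the real implementation without restarting the runtime, assuming the unpatched function is still reachable as torch.serialization.load (the module where torch.load is defined):

    import torch
    import torch.serialization

    # Rebinding torch.load in earlier cells never touched torch.serialization.load,
    # so point the top-level name back at the original implementation.
    torch.load = torch.serialization.load

Restarting the kernel and applying a single, non-recursive patch achieves the same thing more cleanly.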
|
[2025-08-17 19:23:19,592][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:23:19,593][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:23:19,594][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:23:19,595][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:23:19,596][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:23:19,597][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:23:19,599][04426] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:23:19,600][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:23:19,601][04426] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-17 19:23:19,602][04426] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-17 19:23:19,603][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:23:19,604][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:23:19,605][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:23:19,606][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:23:19,607][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:23:19,636][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:23:19,638][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:23:19,650][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:23:19,685][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:23:19,687][04426] Policy head output size: 512 |
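Note: the two RunningMeanStd lines describe normalizers sized to the logged shapes, one over the (3, 72, 128) image observation and one over a scalar quantity (likely the return estimate), feeding a convolutional encoder and policy head that both emit 512-dim features. A minimal sketch of the running-statistics update such a normalizer performs, not Sample Factory's exact implementation:

    import numpy as np

    class RunningMeanStd:
        """Per-element mean/variance tracked with the parallel (Chan et al.) update."""

        def __init__(self, shape):
            self.mean = np.zeros(shape, dtype=np.float64)
            self.var = np.ones(shape, dtype=np.float64)
            self.count = 1e-4  # avoids division by zero on the first batch

        def update(self, batch):
            b_mean, b_var, b_count = batch.mean(axis=0), batch.var(axis=0), batch.shape[0]
            delta = b_mean - self.mean
            total = self.count + b_count
            self.mean = self.mean + delta * b_count / total
            m_a, m_b = self.var * self.count, b_var * b_count
            self.var = (m_a + m_b + delta ** 2 * self.count * b_count / total) / total
            self.count = total

    obs_norm = RunningMeanStd((3, 72, 128))  # image observations, as logged
    ret_norm = RunningMeanStd((1,))          # scalar statistic, as logged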
|
[2025-08-17 19:23:19,719][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:23:20,471][04426] Num frames 100... |
|
[2025-08-17 19:23:20,608][04426] Num frames 200... |
|
[2025-08-17 19:23:20,755][04426] Num frames 300... |
|
[2025-08-17 19:23:20,894][04426] Num frames 400... |
|
[2025-08-17 19:23:21,029][04426] Num frames 500... |
|
[2025-08-17 19:23:21,168][04426] Num frames 600... |
|
[2025-08-17 19:23:21,308][04426] Num frames 700... |
|
[2025-08-17 19:23:21,444][04426] Num frames 800... |
|
[2025-08-17 19:23:21,582][04426] Num frames 900... |
|
[2025-08-17 19:23:21,721][04426] Num frames 1000... |
|
[2025-08-17 19:23:21,871][04426] Num frames 1100... |
|
[2025-08-17 19:23:22,009][04426] Num frames 1200... |
|
[2025-08-17 19:23:22,151][04426] Num frames 1300... |
|
[2025-08-17 19:23:22,226][04426] Avg episode rewards: #0: 29.120, true rewards: #0: 13.120 |
|
[2025-08-17 19:23:22,227][04426] Avg episode reward: 29.120, avg true_objective: 13.120 |
|
[2025-08-17 19:23:22,351][04426] Num frames 1400... |
|
[2025-08-17 19:23:22,488][04426] Num frames 1500... |
|
[2025-08-17 19:23:22,626][04426] Num frames 1600... |
|
[2025-08-17 19:23:22,765][04426] Num frames 1700... |
|
[2025-08-17 19:23:22,916][04426] Num frames 1800... |
|
[2025-08-17 19:23:23,058][04426] Num frames 1900... |
|
[2025-08-17 19:23:23,201][04426] Num frames 2000... |
|
[2025-08-17 19:23:23,294][04426] Avg episode rewards: #0: 24.130, true rewards: #0: 10.130 |
|
[2025-08-17 19:23:23,295][04426] Avg episode reward: 24.130, avg true_objective: 10.130 |
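Note: the "Avg episode rewards" figures are running means over all episodes completed so far, not the score of the latest episode, so individual returns can be recovered from consecutive entries; checking against the two entries above:

    ep1_reward, avg_after_two = 29.120, 24.130        # logged values
    ep2_reward = 2 * avg_after_two - ep1_reward       # 19.140 for the second episode
    ep1_true, true_avg_after_two = 13.120, 10.130
    ep2_true = 2 * true_avg_after_two - ep1_true      # 7.140 true reward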
|
[2025-08-17 19:23:23,398][04426] Num frames 2100... |
|
[2025-08-17 19:23:23,535][04426] Num frames 2200... |
|
[2025-08-17 19:23:23,670][04426] Num frames 2300... |
|
[2025-08-17 19:23:23,810][04426] Num frames 2400... |
|
[2025-08-17 19:23:23,961][04426] Num frames 2500... |
|
[2025-08-17 19:23:24,102][04426] Num frames 2600... |
|
[2025-08-17 19:23:24,241][04426] Num frames 2700... |
|
[2025-08-17 19:23:24,383][04426] Num frames 2800... |
|
[2025-08-17 19:23:24,521][04426] Num frames 2900... |
|
[2025-08-17 19:23:24,663][04426] Num frames 3000... |
|
[2025-08-17 19:23:24,745][04426] Avg episode rewards: #0: 23.060, true rewards: #0: 10.060 |
|
[2025-08-17 19:23:24,746][04426] Avg episode reward: 23.060, avg true_objective: 10.060 |
|
[2025-08-17 19:23:24,859][04426] Num frames 3100... |
|
[2025-08-17 19:23:25,010][04426] Num frames 3200... |
|
[2025-08-17 19:23:25,146][04426] Num frames 3300... |
|
[2025-08-17 19:23:25,288][04426] Num frames 3400... |
|
[2025-08-17 19:23:25,421][04426] Num frames 3500... |
|
[2025-08-17 19:23:25,557][04426] Num frames 3600... |
|
[2025-08-17 19:23:25,717][04426] Avg episode rewards: #0: 20.190, true rewards: #0: 9.190 |
|
[2025-08-17 19:23:25,718][04426] Avg episode reward: 20.190, avg true_objective: 9.190 |
|
[2025-08-17 19:23:25,755][04426] Num frames 3700... |
|
[2025-08-17 19:23:25,895][04426] Num frames 3800... |
|
[2025-08-17 19:23:26,048][04426] Num frames 3900... |
|
[2025-08-17 19:23:26,189][04426] Num frames 4000... |
|
[2025-08-17 19:23:26,330][04426] Num frames 4100... |
|
[2025-08-17 19:23:26,479][04426] Num frames 4200... |
|
[2025-08-17 19:23:26,618][04426] Num frames 4300... |
|
[2025-08-17 19:23:26,757][04426] Num frames 4400... |
|
[2025-08-17 19:23:26,898][04426] Num frames 4500... |
|
[2025-08-17 19:23:26,965][04426] Avg episode rewards: #0: 19.816, true rewards: #0: 9.016 |
|
[2025-08-17 19:23:26,968][04426] Avg episode reward: 19.816, avg true_objective: 9.016 |
|
[2025-08-17 19:23:27,135][04426] Num frames 4600... |
|
[2025-08-17 19:23:27,336][04426] Num frames 4700... |
|
[2025-08-17 19:23:27,523][04426] Num frames 4800... |
|
[2025-08-17 19:23:27,694][04426] Avg episode rewards: #0: 17.600, true rewards: #0: 8.100 |
|
[2025-08-17 19:23:27,697][04426] Avg episode reward: 17.600, avg true_objective: 8.100 |
|
[2025-08-17 19:23:27,774][04426] Num frames 4900... |
|
[2025-08-17 19:23:27,962][04426] Num frames 5000... |
|
[2025-08-17 19:23:28,163][04426] Num frames 5100... |
|
[2025-08-17 19:23:28,342][04426] Num frames 5200... |
|
[2025-08-17 19:23:28,529][04426] Num frames 5300... |
|
[2025-08-17 19:23:28,728][04426] Num frames 5400... |
|
[2025-08-17 19:23:28,922][04426] Num frames 5500... |
|
[2025-08-17 19:23:29,120][04426] Num frames 5600... |
|
[2025-08-17 19:23:29,309][04426] Num frames 5700... |
|
[2025-08-17 19:23:29,443][04426] Num frames 5800... |
|
[2025-08-17 19:23:29,580][04426] Num frames 5900... |
|
[2025-08-17 19:23:29,718][04426] Num frames 6000... |
|
[2025-08-17 19:23:29,855][04426] Num frames 6100... |
|
[2025-08-17 19:23:29,994][04426] Num frames 6200... |
|
[2025-08-17 19:23:30,136][04426] Num frames 6300... |
|
[2025-08-17 19:23:30,297][04426] Avg episode rewards: #0: 21.091, true rewards: #0: 9.091 |
|
[2025-08-17 19:23:30,298][04426] Avg episode reward: 21.091, avg true_objective: 9.091 |
|
[2025-08-17 19:23:30,352][04426] Num frames 6400... |
|
[2025-08-17 19:23:30,493][04426] Num frames 6500... |
|
[2025-08-17 19:23:30,634][04426] Num frames 6600... |
|
[2025-08-17 19:23:30,774][04426] Num frames 6700... |
|
[2025-08-17 19:23:30,915][04426] Num frames 6800... |
|
[2025-08-17 19:23:31,056][04426] Num frames 6900... |
|
[2025-08-17 19:23:31,196][04426] Num frames 7000... |
|
[2025-08-17 19:23:31,342][04426] Num frames 7100... |
|
[2025-08-17 19:23:31,480][04426] Num frames 7200... |
|
[2025-08-17 19:23:31,615][04426] Num frames 7300... |
|
[2025-08-17 19:23:31,753][04426] Num frames 7400... |
|
[2025-08-17 19:23:31,894][04426] Num frames 7500... |
|
[2025-08-17 19:23:32,035][04426] Num frames 7600... |
|
[2025-08-17 19:23:32,108][04426] Avg episode rewards: #0: 22.140, true rewards: #0: 9.515 |
|
[2025-08-17 19:23:32,109][04426] Avg episode reward: 22.140, avg true_objective: 9.515 |
|
[2025-08-17 19:23:32,236][04426] Num frames 7700... |
|
[2025-08-17 19:23:32,385][04426] Num frames 7800... |
|
[2025-08-17 19:23:32,525][04426] Num frames 7900... |
|
[2025-08-17 19:23:32,664][04426] Num frames 8000... |
|
[2025-08-17 19:23:32,803][04426] Num frames 8100... |
|
[2025-08-17 19:23:32,941][04426] Num frames 8200... |
|
[2025-08-17 19:23:33,080][04426] Num frames 8300... |
|
[2025-08-17 19:23:33,221][04426] Num frames 8400... |
|
[2025-08-17 19:23:33,386][04426] Num frames 8500... |
|
[2025-08-17 19:23:33,457][04426] Avg episode rewards: #0: 22.009, true rewards: #0: 9.453 |
|
[2025-08-17 19:23:33,458][04426] Avg episode reward: 22.009, avg true_objective: 9.453 |
|
[2025-08-17 19:23:33,583][04426] Num frames 8600... |
|
[2025-08-17 19:23:33,720][04426] Num frames 8700... |
|
[2025-08-17 19:23:33,855][04426] Num frames 8800... |
|
[2025-08-17 19:23:33,989][04426] Num frames 8900... |
|
[2025-08-17 19:23:34,127][04426] Num frames 9000... |
|
[2025-08-17 19:23:34,268][04426] Num frames 9100... |
|
[2025-08-17 19:23:34,421][04426] Num frames 9200... |
|
[2025-08-17 19:23:34,559][04426] Num frames 9300... |
|
[2025-08-17 19:23:34,697][04426] Num frames 9400... |
|
[2025-08-17 19:23:34,837][04426] Num frames 9500... |
|
[2025-08-17 19:23:34,980][04426] Avg episode rewards: #0: 21.964, true rewards: #0: 9.564 |
|
[2025-08-17 19:23:34,981][04426] Avg episode reward: 21.964, avg true_objective: 9.564 |
|
[2025-08-17 19:24:36,435][04426] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-08-17 19:25:36,596][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:25:36,597][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:25:36,598][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:25:36,599][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:25:36,600][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:25:36,600][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:25:36,601][04426] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-08-17 19:25:36,602][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:25:36,604][04426] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-08-17 19:25:36,605][04426] Adding new argument 'hf_repository'='LizardAPN/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-08-17 19:25:36,605][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:25:36,606][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:25:36,607][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:25:36,608][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:25:36,609][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
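Note: this pass re-runs evaluation with push_to_hub=True and a target hf_repository, i.e., the overrides the Deep RL course notebook forwards to Sample Factory's enjoy entry point. A sketch of an equivalent call assembled only from the flags visible in this log; parse_vizdoom_cfg is a notebook-defined helper (assumed here, not shown in the log), and the env name is inferred from the repository name:

    from sample_factory.enjoy import enjoy

    # parse_vizdoom_cfg: hypothetical notebook helper that registers the VizDoom
    # components and parses these overrides into a config object.
    cfg = parse_vizdoom_cfg(
        argv=[
            "--env=doom_health_gathering_supreme",  # inferred from the repo name
            "--num_workers=1", "--no_render", "--save_video",
            "--max_num_frames=100000", "--max_num_episodes=10",
            "--push_to_hub",
            "--hf_repository=LizardAPN/rl_course_vizdoom_health_gathering_supreme",
        ],
        evaluation=True,
    )
    status = enjoy(cfg)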
|
[2025-08-17 19:25:36,633][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:25:36,634][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:25:36,647][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:25:36,680][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:25:36,681][04426] Policy head output size: 512 |
|
[2025-08-17 19:25:36,698][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:25:36,700][04426] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.dtype was not an allowed global by default. Please use `torch.serialization.add_safe_globals([dtype])` or the `torch.serialization.safe_globals([dtype])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:25:36,701][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:25:36,703][04426] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.dtype was not an allowed global by default. Please use `torch.serialization.add_safe_globals([dtype])` or the `torch.serialization.safe_globals([dtype])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-17 19:25:36,705][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:25:36,707][04426] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/usr/local/lib/python3.11/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.dtype was not an allowed global by default. Please use `torch.serialization.add_safe_globals([dtype])` or the `torch.serialization.safe_globals([dtype])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
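Note: this time the unpatched torch.load runs, and PyTorch 2.6's weights_only=True default rejects the pickled numpy.dtype inside the checkpoint. The message spells out the two options; a sketch of the safer one it recommends, appropriate only because this checkpoint comes from the same trusted training run (other checkpoints may need further globals allowlisted):

    import numpy as np
    import torch
    from torch.serialization import add_safe_globals

    # Allowlist numpy.dtype, the global the WeightsUnpickler rejected above.
    add_safe_globals([np.dtype])

    checkpoint = torch.load(
        "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth",
        map_location="cpu",
        weights_only=True,
    )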
|
[2025-08-17 19:26:45,334][04426] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-08-17 19:26:45,335][04426] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-17 19:26:45,336][04426] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-17 19:26:45,337][04426] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-17 19:26:45,338][04426] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-17 19:26:45,339][04426] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-17 19:26:45,340][04426] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-08-17 19:26:45,341][04426] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-17 19:26:45,341][04426] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-08-17 19:26:45,342][04426] Adding new argument 'hf_repository'='LizardAPN/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-08-17 19:26:45,343][04426] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-17 19:26:45,344][04426] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-17 19:26:45,345][04426] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-17 19:26:45,346][04426] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-17 19:26:45,347][04426] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-17 19:26:45,370][04426] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-17 19:26:45,371][04426] RunningMeanStd input shape: (1,) |
|
[2025-08-17 19:26:45,384][04426] ConvEncoder: input_channels=3 |
|
[2025-08-17 19:26:45,417][04426] Conv encoder output size: 512 |
|
[2025-08-17 19:26:45,419][04426] Policy head output size: 512 |
|
[2025-08-17 19:26:45,437][04426] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-17 19:26:45,898][04426] Num frames 100... |
|
[2025-08-17 19:26:46,055][04426] Num frames 200... |
|
[2025-08-17 19:26:46,195][04426] Num frames 300... |
|
[2025-08-17 19:26:46,337][04426] Num frames 400... |
|
[2025-08-17 19:26:46,485][04426] Num frames 500... |
|
[2025-08-17 19:26:46,627][04426] Num frames 600... |
|
[2025-08-17 19:26:46,771][04426] Num frames 700... |
|
[2025-08-17 19:26:46,909][04426] Num frames 800... |
|
[2025-08-17 19:26:47,008][04426] Avg episode rewards: #0: 15.320, true rewards: #0: 8.320 |
|
[2025-08-17 19:26:47,009][04426] Avg episode reward: 15.320, avg true_objective: 8.320 |
|
[2025-08-17 19:26:47,101][04426] Num frames 900... |
|
[2025-08-17 19:26:47,237][04426] Num frames 1000... |
|
[2025-08-17 19:26:47,396][04426] Num frames 1100... |
|
[2025-08-17 19:26:47,536][04426] Num frames 1200... |
|
[2025-08-17 19:26:47,674][04426] Num frames 1300... |
|
[2025-08-17 19:26:47,816][04426] Num frames 1400... |
|
[2025-08-17 19:26:47,952][04426] Num frames 1500... |
|
[2025-08-17 19:26:48,102][04426] Num frames 1600... |
|
[2025-08-17 19:26:48,236][04426] Num frames 1700... |
|
[2025-08-17 19:26:48,382][04426] Num frames 1800... |
|
[2025-08-17 19:26:48,518][04426] Num frames 1900... |
|
[2025-08-17 19:26:48,698][04426] Num frames 2000... |
|
[2025-08-17 19:26:48,886][04426] Num frames 2100... |
|
[2025-08-17 19:26:49,082][04426] Num frames 2200... |
|
[2025-08-17 19:26:49,216][04426] Avg episode rewards: #0: 23.700, true rewards: #0: 11.200 |
|
[2025-08-17 19:26:49,217][04426] Avg episode reward: 23.700, avg true_objective: 11.200 |
|
[2025-08-17 19:26:49,328][04426] Num frames 2300... |
|
[2025-08-17 19:26:49,507][04426] Num frames 2400... |
|
[2025-08-17 19:26:49,691][04426] Num frames 2500... |
|
[2025-08-17 19:26:49,868][04426] Num frames 2600... |
|
[2025-08-17 19:26:50,059][04426] Num frames 2700... |
|
[2025-08-17 19:26:50,272][04426] Num frames 2800... |
|
[2025-08-17 19:26:50,460][04426] Num frames 2900... |
|
[2025-08-17 19:26:50,654][04426] Num frames 3000... |
|
[2025-08-17 19:26:50,729][04426] Avg episode rewards: #0: 20.694, true rewards: #0: 10.027 |
|
[2025-08-17 19:26:50,730][04426] Avg episode reward: 20.694, avg true_objective: 10.027 |
|
[2025-08-17 19:26:50,858][04426] Num frames 3100... |
|
[2025-08-17 19:26:50,994][04426] Num frames 3200... |
|
[2025-08-17 19:26:51,130][04426] Num frames 3300... |
|
[2025-08-17 19:26:51,284][04426] Num frames 3400... |
|
[2025-08-17 19:26:51,424][04426] Num frames 3500... |
|
[2025-08-17 19:26:51,562][04426] Num frames 3600... |
|
[2025-08-17 19:26:51,699][04426] Num frames 3700... |
|
[2025-08-17 19:26:51,837][04426] Num frames 3800... |
|
[2025-08-17 19:26:51,973][04426] Num frames 3900... |
|
[2025-08-17 19:26:52,110][04426] Num frames 4000... |
|
[2025-08-17 19:26:52,265][04426] Num frames 4100... |
|
[2025-08-17 19:26:52,403][04426] Num frames 4200... |
|
[2025-08-17 19:26:52,542][04426] Num frames 4300... |
|
[2025-08-17 19:26:52,682][04426] Num frames 4400... |
|
[2025-08-17 19:26:52,831][04426] Num frames 4500... |
|
[2025-08-17 19:26:52,979][04426] Num frames 4600... |
|
[2025-08-17 19:26:53,131][04426] Num frames 4700... |
|
[2025-08-17 19:26:53,279][04426] Num frames 4800... |
|
[2025-08-17 19:26:53,420][04426] Num frames 4900... |
|
[2025-08-17 19:26:53,522][04426] Avg episode rewards: #0: 28.832, true rewards: #0: 12.332 |
|
[2025-08-17 19:26:53,523][04426] Avg episode reward: 28.832, avg true_objective: 12.332 |
|
[2025-08-17 19:26:53,614][04426] Num frames 5000... |
|
[2025-08-17 19:26:53,748][04426] Num frames 5100... |
|
[2025-08-17 19:26:53,885][04426] Num frames 5200... |
|
[2025-08-17 19:26:54,021][04426] Num frames 5300... |
|
[2025-08-17 19:26:54,159][04426] Num frames 5400... |
|
[2025-08-17 19:26:54,305][04426] Num frames 5500... |
|
[2025-08-17 19:26:54,383][04426] Avg episode rewards: #0: 25.018, true rewards: #0: 11.018 |
|
[2025-08-17 19:26:54,384][04426] Avg episode reward: 25.018, avg true_objective: 11.018 |
|
[2025-08-17 19:26:54,508][04426] Num frames 5600... |
|
[2025-08-17 19:26:54,642][04426] Num frames 5700... |
|
[2025-08-17 19:26:54,777][04426] Num frames 5800... |
|
[2025-08-17 19:26:54,913][04426] Num frames 5900... |
|
[2025-08-17 19:26:55,049][04426] Num frames 6000... |
|
[2025-08-17 19:26:55,187][04426] Num frames 6100... |
|
[2025-08-17 19:26:55,331][04426] Num frames 6200... |
|
[2025-08-17 19:26:55,482][04426] Num frames 6300... |
|
[2025-08-17 19:26:55,619][04426] Num frames 6400... |
|
[2025-08-17 19:26:55,756][04426] Num frames 6500... |
|
[2025-08-17 19:26:55,897][04426] Num frames 6600... |
|
[2025-08-17 19:26:56,032][04426] Num frames 6700... |
|
[2025-08-17 19:26:56,209][04426] Avg episode rewards: #0: 25.815, true rewards: #0: 11.315 |
|
[2025-08-17 19:26:56,210][04426] Avg episode reward: 25.815, avg true_objective: 11.315 |
|
[2025-08-17 19:26:56,229][04426] Num frames 6800... |
|
[2025-08-17 19:26:56,364][04426] Num frames 6900... |
|
[2025-08-17 19:26:56,509][04426] Num frames 7000... |
|
[2025-08-17 19:26:56,645][04426] Num frames 7100... |
|
[2025-08-17 19:26:56,777][04426] Num frames 7200... |
|
[2025-08-17 19:26:56,915][04426] Num frames 7300... |
|
[2025-08-17 19:26:57,051][04426] Num frames 7400... |
|
[2025-08-17 19:26:57,189][04426] Num frames 7500... |
|
[2025-08-17 19:26:57,328][04426] Num frames 7600... |
|
[2025-08-17 19:26:57,477][04426] Num frames 7700... |
|
[2025-08-17 19:26:57,614][04426] Num frames 7800... |
|
[2025-08-17 19:26:57,750][04426] Num frames 7900... |
|
[2025-08-17 19:26:57,899][04426] Num frames 8000... |
|
[2025-08-17 19:26:57,998][04426] Avg episode rewards: #0: 26.187, true rewards: #0: 11.473 |
|
[2025-08-17 19:26:57,999][04426] Avg episode reward: 26.187, avg true_objective: 11.473 |
|
[2025-08-17 19:26:58,096][04426] Num frames 8100... |
|
[2025-08-17 19:26:58,235][04426] Num frames 8200... |
|
[2025-08-17 19:26:58,369][04426] Num frames 8300... |
|
[2025-08-17 19:26:58,515][04426] Num frames 8400... |
|
[2025-08-17 19:26:58,653][04426] Num frames 8500... |
|
[2025-08-17 19:26:58,787][04426] Num frames 8600... |
|
[2025-08-17 19:26:58,922][04426] Num frames 8700... |
|
[2025-08-17 19:26:59,057][04426] Num frames 8800... |
|
[2025-08-17 19:26:59,198][04426] Num frames 8900... |
|
[2025-08-17 19:26:59,338][04426] Num frames 9000... |
|
[2025-08-17 19:26:59,475][04426] Num frames 9100... |
|
[2025-08-17 19:26:59,622][04426] Num frames 9200... |
|
[2025-08-17 19:26:59,759][04426] Num frames 9300... |
|
[2025-08-17 19:26:59,898][04426] Num frames 9400... |
|
[2025-08-17 19:27:00,041][04426] Num frames 9500... |
|
[2025-08-17 19:27:00,181][04426] Num frames 9600... |
|
[2025-08-17 19:27:00,322][04426] Num frames 9700... |
|
[2025-08-17 19:27:00,458][04426] Num frames 9800... |
|
[2025-08-17 19:27:00,606][04426] Num frames 9900... |
|
[2025-08-17 19:27:00,754][04426] Num frames 10000... |
|
[2025-08-17 19:27:00,950][04426] Num frames 10100... |
|
[2025-08-17 19:27:01,064][04426] Avg episode rewards: #0: 29.914, true rewards: #0: 12.664 |
|
[2025-08-17 19:27:01,065][04426] Avg episode reward: 29.914, avg true_objective: 12.664 |
|
[2025-08-17 19:27:01,204][04426] Num frames 10200... |
|
[2025-08-17 19:27:01,389][04426] Num frames 10300... |
|
[2025-08-17 19:27:01,575][04426] Num frames 10400... |
|
[2025-08-17 19:27:01,767][04426] Num frames 10500... |
|
[2025-08-17 19:27:01,955][04426] Num frames 10600... |
|
[2025-08-17 19:27:02,145][04426] Num frames 10700... |
|
[2025-08-17 19:27:02,339][04426] Num frames 10800... |
|
[2025-08-17 19:27:02,527][04426] Num frames 10900... |
|
[2025-08-17 19:27:02,735][04426] Num frames 11000... |
|
[2025-08-17 19:27:02,930][04426] Num frames 11100... |
|
[2025-08-17 19:27:03,038][04426] Avg episode rewards: #0: 29.267, true rewards: #0: 12.378 |
|
[2025-08-17 19:27:03,039][04426] Avg episode reward: 29.267, avg true_objective: 12.378 |
|
[2025-08-17 19:27:03,125][04426] Num frames 11200... |
|
[2025-08-17 19:27:03,263][04426] Num frames 11300... |
|
[2025-08-17 19:27:03,399][04426] Num frames 11400... |
|
[2025-08-17 19:27:03,534][04426] Num frames 11500... |
|
[2025-08-17 19:27:03,673][04426] Num frames 11600... |
|
[2025-08-17 19:27:03,816][04426] Num frames 11700... |
|
[2025-08-17 19:27:03,954][04426] Num frames 11800... |
|
[2025-08-17 19:27:04,093][04426] Num frames 11900... |
|
[2025-08-17 19:27:04,230][04426] Num frames 12000... |
|
[2025-08-17 19:27:04,371][04426] Num frames 12100... |
|
[2025-08-17 19:27:04,507][04426] Num frames 12200... |
|
[2025-08-17 19:27:04,648][04426] Num frames 12300... |
|
[2025-08-17 19:27:04,798][04426] Num frames 12400... |
|
[2025-08-17 19:27:04,967][04426] Avg episode rewards: #0: 29.484, true rewards: #0: 12.484 |
|
[2025-08-17 19:27:04,968][04426] Avg episode reward: 29.484, avg true_objective: 12.484 |
|
[2025-08-17 19:28:23,914][04426] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
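Note: with push_to_hub=True, the checkpoint, config.json, and this replay.mp4 are uploaded to the LizardAPN/rl_course_vizdoom_health_gathering_supreme repository. A sketch of pulling that experiment back into a local train_dir with Sample Factory's stock downloader (flag names as in its Hugging Face integration docs; wrapped in subprocess only to stay in Python):

    import subprocess

    # Download the pushed experiment (checkpoints, config, replay) from the Hub.
    subprocess.run(
        [
            "python", "-m", "sample_factory.huggingface.load_from_hub",
            "-r", "LizardAPN/rl_course_vizdoom_health_gathering_supreme",
            "-d", "./train_dir",
        ],
        check=True,
    )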
|
|