2025-05-15 00:01:35 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bit training: False
2025-05-15 00:01:39 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bit training: False
2025-05-15 00:01:40 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bit training: False
2025-05-15 00:01:40 - INFO - __main__ - Model parameters ModelArguments(base_model_revision=None, model_name_or_path='/gscratch/weirdlab/mateogc/projects/vlm-navigation/data/paligemma2-3b-pt-224-sft-lora-magicsoup_no_cfiphone_no_insta_sub5', model_revision='main', model_code_revision=None, torch_dtype='bfloat16', tokenizer_name_or_path=None, processor_name_or_path=None, trust_remote_code=False, attn_implementation='eager', use_peft=True, lora_r=16, lora_alpha=16, lora_dropout=0.05, lora_target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj'], lora_modules_to_save=None, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False, bnb_4bit_quant_storage='uint8')
2025-05-15 00:01:40 - INFO - __main__ - Data parameters DataArguments(chat_template=None, dataset_mixer={'mateoguaman/gates_lobby_every1_100pct_sub5': 1.0}, eval_dataset_mixer={'mateoguaman/gates_lobby_every1_100pct_sub5': 1.0}, text_column='text', dataset_splits=['train', 'test'], train_splits=['train'], validation_splits=['validation'], processing_params={}, dataset_configs=None, preprocessing_num_workers=12, truncation_side=None, auto_insert_empty_system_msg=True, auto_set_chat_template=False, cache_dataset_only=False, just_combine_data=False, output_dataset_name=None, output_dataset_description=None, hf_entity=None)
2025-05-15 00:01:40 - INFO - __main__ - Training/evaluation parameters SFTConfig(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
average_tokens_across_devices=False,
batch_eval_metrics=False,
bf16=True,
bf16_full_eval=False,
chars_per_token=<CHARS_PER_TOKEN>,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
dataset_batch_size=None,
dataset_kwargs=None,
dataset_num_proc=None,
dataset_text_field=text,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_packing=None,
eval_steps=0.25,
eval_strategy=IntervalStrategy.STEPS,
eval_use_gather_object=False,
evaluation_strategy=None,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=False,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5,
hub_model_revision=main,
hub_private_repo=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_for_metrics=[],
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.0001,
length_column_name=length,
load_best_model_at_end=True,
local_rank=0,
log_level=info,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=data/paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5/runs/May15_00-01-39_boss,
logging_first_step=True,
logging_nan_inf_filter=True,
logging_steps=20,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_kwargs={},
lr_scheduler_type=SchedulerType.COSINE,
max_grad_norm=1.0,
max_length=2048,
max_seq_length=2048,
max_steps=-1,
metric_for_best_model=loss,
model_init_kwargs=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_of_sequences=None,
num_train_epochs=1,
optim=OptimizerNames.ADAMW_TORCH,
optim_args=None,
optim_target_modules=None,
output_dir=data/paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5,
overwrite_output_dir=True,
packing=True,
padding_free=False,
past_index=-1,
per_device_eval_batch_size=2,
per_device_train_batch_size=2,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['wandb'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=data/paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=0.25,
save_strategy=SaveStrategy.STEPS,
save_total_limit=None,
seed=42,
skip_memory_metrics=True,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_liger=None,
use_liger_kernel=False,
use_mps_device=False,
warmup_ratio=0.1,
warmup_steps=0,
weight_decay=1e-05,
)
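The `use_peft`/`lora_*` fields in the `ModelArguments` dump above map onto a PEFT adapter configuration. Below is a minimal sketch of the equivalent `peft.LoraConfig`; the `task_type` is an assumption, since the log does not print it:

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter config implied by the logged
# ModelArguments (lora_r=16, lora_alpha=16, lora_dropout=0.05).
lora_config = LoraConfig(
    r=16,                  # "lora_r" in the log
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",  # assumption: PaliGemma2 is fine-tuned as a causal LM
)
```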
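Similarly, the `dataset_mixer` in the `DataArguments` dump resolves to a single Hub dataset kept at weight 1.0, i.e. 100% of its training split. A hedged sketch of what such a mixer-style loader typically does (`mix_datasets` is a hypothetical helper, not the script's actual function):

```python
from datasets import concatenate_datasets, load_dataset

def mix_datasets(mixer: dict, split: str):
    """Load every dataset in the mixer and keep the given fraction of `split`."""
    parts = []
    for name, fraction in mixer.items():
        ds = load_dataset(name, split=split)
        parts.append(ds.select(range(int(len(ds) * fraction))))
    return concatenate_datasets(parts)

# The logged mixer has one dataset at weight 1.0, so this is just its full train split.
train_dataset = mix_datasets({"mateoguaman/gates_lobby_every1_100pct_sub5": 1.0}, split="train")
```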
2025-05-15 00:03:59 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bit training: False
2025-05-15 00:04:00 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bit training: False
2025-05-15 00:04:00 - INFO - __main__ - Model parameters ModelArguments(base_model_revision=None, model_name_or_path='mateoguaman/paligemma2-3b-pt-224-sft-lora-magicsoup_no_cfiphone_no_insta_sub5', model_revision='main', model_code_revision=None, torch_dtype='bfloat16', tokenizer_name_or_path=None, processor_name_or_path=None, trust_remote_code=False, attn_implementation='eager', use_peft=True, lora_r=16, lora_alpha=16, lora_dropout=0.05, lora_target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj'], lora_modules_to_save=None, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False, bnb_4bit_quant_storage='uint8')
2025-05-15 00:04:00 - INFO - __main__ - Data parameters DataArguments(chat_template=None, dataset_mixer={'mateoguaman/gates_lobby_every1_100pct_sub5': 1.0}, eval_dataset_mixer={'mateoguaman/gates_lobby_every1_100pct_sub5': 1.0}, text_column='text', dataset_splits=['train', 'test'], train_splits=['train'], validation_splits=['validation'], processing_params={}, dataset_configs=None, preprocessing_num_workers=12, truncation_side=None, auto_insert_empty_system_msg=True, auto_set_chat_template=False, cache_dataset_only=False, just_combine_data=False, output_dataset_name=None, output_dataset_description=None, hf_entity=None)
2025-05-15 00:04:00 - INFO - __main__ - Training/evaluation parameters SFTConfig(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
average_tokens_across_devices=False,
batch_eval_metrics=False,
bf16=True,
bf16_full_eval=False,
chars_per_token=<CHARS_PER_TOKEN>,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
dataset_batch_size=None,
dataset_kwargs=None,
dataset_num_proc=None,
dataset_text_field=text,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_packing=None,
eval_steps=0.25,
eval_strategy=IntervalStrategy.STEPS,
eval_use_gather_object=False,
evaluation_strategy=None,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=False,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5,
hub_model_revision=main,
hub_private_repo=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_for_metrics=[],
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.0001,
length_column_name=length,
load_best_model_at_end=True,
local_rank=0,
log_level=info,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=data/paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5/runs/May15_00-03-59_boss,
logging_first_step=True,
logging_nan_inf_filter=True,
logging_steps=20,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_kwargs={},
lr_scheduler_type=SchedulerType.COSINE,
max_grad_norm=1.0,
max_length=2048,
max_seq_length=2048,
max_steps=-1,
metric_for_best_model=loss,
model_init_kwargs=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_of_sequences=None,
num_train_epochs=1,
optim=OptimizerNames.ADAMW_TORCH,
optim_args=None,
optim_target_modules=None,
output_dir=data/paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5,
overwrite_output_dir=True,
packing=True,
padding_free=False,
past_index=-1,
per_device_eval_batch_size=2,
per_device_train_batch_size=2,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['wandb'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=data/paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=0.25,
save_strategy=SaveStrategy.STEPS,
save_total_limit=None,
seed=42,
skip_memory_metrics=True,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_liger=None,
use_liger_kernel=False,
use_mps_device=False,
warmup_ratio=0.1,
warmup_steps=0,
weight_decay=1e-05,
)
2025-05-15 00:04:20 - INFO - datasets.builder - Found cached dataset gates_lobby_every1_100pct_sub5 (/home/mateo/.cache/huggingface/datasets/mateoguaman___gates_lobby_every1_100pct_sub5/default/0.0.0/9a6a00b50ee73d2de0f547f9be30121295aa84c5)
2025-05-15 00:04:20 - INFO - datasets.info - Loading Dataset info from /home/mateo/.cache/huggingface/datasets/mateoguaman___gates_lobby_every1_100pct_sub5/default/0.0.0/9a6a00b50ee73d2de0f547f9be30121295aa84c5
2025-05-15 00:04:20 - INFO - datasets.arrow_dataset - Caching indices mapping at /tmp/hf_datasets-2sj88w25/cache-fb366b4cd978d064.arrow
2025-05-15 00:04:20 - INFO - datasets.arrow_dataset - Caching indices mapping at /tmp/hf_datasets-2sj88w25/cache-05ac2d643e2e786e.arrow
2025-05-15 00:04:21 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
2025-05-15 00:04:21 - INFO - datasets.info - Loading Dataset info from /home/mateo/.cache/huggingface/datasets/mateoguaman___gates_lobby_every1_100pct_sub5/default/0.0.0/9a6a00b50ee73d2de0f547f9be30121295aa84c5
2025-05-15 00:04:21 - INFO - datasets.builder - Found cached dataset gates_lobby_every1_100pct_sub5 (/home/mateo/.cache/huggingface/datasets/mateoguaman___gates_lobby_every1_100pct_sub5/default/0.0.0/9a6a00b50ee73d2de0f547f9be30121295aa84c5)
2025-05-15 00:04:21 - INFO - datasets.info - Loading Dataset info from /home/mateo/.cache/huggingface/datasets/mateoguaman___gates_lobby_every1_100pct_sub5/default/0.0.0/9a6a00b50ee73d2de0f547f9be30121295aa84c5
2025-05-15 00:04:21 - INFO - datasets.arrow_dataset - Caching indices mapping at /tmp/hf_datasets-2sj88w25/cache-f3388d65013fcc8f.arrow
2025-05-15 00:04:21 - INFO - datasets.arrow_dataset - Caching indices mapping at /tmp/hf_datasets-2sj88w25/cache-830ea2c03cd06d98.arrow
2025-05-15 00:04:21 - INFO - __main__ - *** Load pretrained model ***
2025-05-15 00:04:24 - INFO - peft.tuners.tuners_utils - Already found a `peft_config` attribute in the model. This will lead to having multiple adapters in the model. Make sure to know what you are doing!
2025-05-15 00:04:24 - INFO - peft.tuners.tuners_utils - Already found a `peft_config` attribute in the model. This will lead to having multiple adapters in the model. Make sure to know what you are doing!
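The two `peft.tuners.tuners_utils` warnings indicate that the loaded checkpoint already carries a LoRA adapter, so wrapping it in a fresh adapter would stack two. A minimal sketch of the load path the log suggests, assuming `google/paligemma2-3b-pt-224` as the base model (the log names only the adapter repo, not the base):

```python
import torch
from peft import PeftModel
from transformers import PaliGemmaForConditionalGeneration

# Assumption: the adapter repo from the log sits on top of the 224px PaliGemma2 base.
base = PaliGemmaForConditionalGeneration.from_pretrained(
    "google/paligemma2-3b-pt-224", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(
    base,
    "mateoguaman/paligemma2-3b-pt-224-sft-lora-magicsoup_no_cfiphone_no_insta_sub5",
    is_trainable=True,  # keep training the existing adapter rather than stacking a new one
)
print(model.peft_config.keys())  # dict_keys(['default']), as the log reports
```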
2025-05-15 00:04:24 - INFO - __main__ - Model has these adapters: dict_keys(['default'])
2025-05-15 00:04:24 - INFO - __main__ - Sample 1309 of the processed training set: Navigate to x=, y=
2025-05-15 00:04:24 - INFO - __main__ - Sample 228 of the processed training set: Navigate to x=, y=
2025-05-15 00:04:24 - INFO - __main__ - Sample 51 of the processed training set: Navigate to x=, y=
2025-05-15 00:04:25 - INFO - __main__ - Loaded model is already a PeftModel. Continuing LoRA training without creating a new PEFT config.
2025-05-15 00:04:28 - INFO - __main__ - *** Train ***
2025-05-15 00:10:57 - INFO - __main__ - *** Save model ***
2025-05-15 00:10:58 - INFO - __main__ - Model saved to data/paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5
2025-05-15 00:10:58 - INFO - __main__ - *** Evaluate ***
2025-05-15 00:12:11 - INFO - __main__ - *** Training complete ***
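For reference, a condensed sketch of the train/save/evaluate sequence recorded above, in the usual TRL idiom and restricted to the key hyperparameters from the `SFTConfig` dump. `model`, `train_dataset`, and `eval_dataset` are assumed to come from the earlier sketches; this is an illustration, not the script's actual code:

```python
from trl import SFTConfig, SFTTrainer

sft_config = SFTConfig(
    output_dir="data/paligemma2-3b-pt-224-sft-lora-iphone_gates_lobby_small_ft_sub5",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    weight_decay=1e-05,
    bf16=True,
    packing=True,
    max_seq_length=2048,
    eval_strategy="steps",
    eval_steps=0.25,   # fractions of total training steps, as logged
    save_steps=0.25,
    logging_steps=20,
    load_best_model_at_end=True,
    report_to=["wandb"],
    seed=42,
)

trainer = SFTTrainer(
    model=model,                # the PeftModel loaded above
    args=sft_config,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,  # assumption: built from the 'validation' split
)

trainer.train()                            # *** Train ***
trainer.save_model(sft_config.output_dir)  # *** Save model ***
metrics = trainer.evaluate()               # *** Evaluate ***
```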