For Axolotl

#3 by martossien - opened

Axolotl throws an error on the "texte" column because it expects a column named "text", and I couldn't find a good column mapping in the config, so I wrote a Python script to change it:

```python
# -*- coding: utf-8 -*-
from datasets import load_dataset

# Load the original dataset
print("Loading the original dataset...")
dataset = load_dataset("louisbrulenaudet/code-securite-sociale")

# Show the dataset structure for verification
print("Original dataset structure:")
print(dataset['train'].features)

# Rename the 'texte' column to 'text' for Axolotl
print("Renaming column 'texte' to 'text'...")

# Note: dataset is a DatasetDict, so the 'train' split must be accessed first
dataset['train'] = dataset['train'].rename_column("texte", "text")

# Check that the column was renamed
print("Dataset structure after renaming:")
print(dataset['train'].features)

# Save the modified dataset as Parquet at the project root
print("Saving the modified dataset to Parquet...")
dataset['train'].to_parquet("./css_dataset.parquet")

print("Done! The modified dataset is available in './css_dataset.parquet'")
print("Use this path in your Axolotl configuration:")
print("""
datasets:
  - path: ./css_dataset.parquet
    type: completion
    ds_type: parquet
""")
```

And here is the YAML config that works for me after renaming the column "texte" to "text": Mistral Small 24B with QLoRA and FSDP (I had trouble with DeepSpeed ZeRO-3 on my hardware, 7x RTX 3090):

```yaml
base_model: mistralai/Mistral-Small-24B-Instruct-2501
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code: true

load_in_8bit: false
load_in_4bit: true
strict: false

# Dataset configuration with column mapping
datasets:
  - path: ./css_dataset.parquet
    type: completion
# field_text: texte
# input_field: texte
# columns:
#   prompt: texte  # assuming the column containing the text is called "texte"

dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/mistral-24b-css-parquet

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj

gradient_accumulation_steps: 128
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: false

gradient_checkpointing: false
logging_steps: 1
flash_attention: true

warmup_steps: 50
saves_per_epoch: 1
weight_decay: 0.01

fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: false
  fsdp_use_orig_params: true
  fsdp_cpu_ram_efficient_loading: true
  fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_activation_checkpointing: true
```

I had to change this (it was crashing with an AssertionError when saving checkpoints):

```yaml
fsdp_save_optimizer_state: false  # disables saving the optimizer state
```

New version of the config that works without crashing:
```yaml
base_model: mistralai/Mistral-Small-24B-Instruct-2501
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: ./css_dataset.parquet
    type: completion

dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/mistral-24b-css-parquet-run4

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj

gradient_accumulation_steps: 128
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_hf
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: false

gradient_checkpointing: false
logging_steps: 1
flash_attention: true

warmup_steps: 50
saves_per_epoch: 1
weight_decay: 0.01

# Checkpoint saving configuration

save_strategy: "steps"
save_optimizer: false
save_safetensors: true

fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: true
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_use_orig_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_activation_checkpointing: true
  fsdp_save_optimizer_state: false
```
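
As a follow-up, here is a minimal sketch (not from the original post) of how the resulting QLoRA adapter could be loaded for a quick smoke test; the adapter path is an assumption based on output_dir above and may in practice be a checkpoint-* subfolder written by Axolotl:

```python
# Smoke-test sketch: load the base model in 4-bit and attach the trained adapter.
# adapter_dir is an assumption based on output_dir in the config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-Small-24B-Instruct-2501"
adapter_dir = "./outputs/mistral-24b-css-parquet-run4"  # adjust to the actual checkpoint folder

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_dir)

prompt = "Article L111-1 du Code de la sécurité sociale"
inputs = tok(prompt, return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```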
