
Built with Axolotl

Axolotl config (axolotl version 0.11.0.dev0):

# === Model Configuration ===
base_model: inflatebot/MN-12B-Mag-Mell-R1
load_in_8bit: false
load_in_4bit: true

# === HF Configuration === 
hub_model_id: ToastyPigeon/nemo-kimi-lora-2e
hub_strategy: "checkpoint"

# === Training Setup ===
num_epochs: 2
micro_batch_size: 1
gradient_accumulation_steps: 2
sequence_len: 32768
sequence_parallel_degree: 2
heads_k_stride: 1
sample_packing: true
pad_to_sequence_len: false
#max_steps: 10
# === Evaluation ===
val_set_size: 0.01
evals_per_epoch: 10
#eval_steps: 20
#max_steps: 60
#eval_table_size:
eval_max_new_tokens: 128
eval_sample_packing: true
#eval_strategy: "no"

# === LoRA Configuration ===
adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
peft_use_rslora: false
lora_modules_to_save:
#  - embed_tokens
#  - lm_head
#fix_untrained_tokens: true
#lora_mlp_kernel: true
#lora_qkv_kernel: true
#lora_o_kernel: true

# === Hyperparameter Configuration ===
#optimizer: apollo_adamw_layerwise
warmup_steps: 0
optimizer: adamw_torch_fused
#optimizer: paged_adamw_8bit
#optim_args:
#  enable_stochastic_rounding: true
#  enable_cautious: true
#  enable_8bit: true
# Apollo-mini configuration:
#optim_args: "proj=random,rank=128,scale=128.0,scale_type=tensor,update_proj_gap=100"
# Regular Apollo configuration:
# optim_args: 
#optim_target_modules: all_linear
learning_rate: 5e-6
lr_scheduler: cosine
#cosine_min_lr_ratio: 0.2
#lr_scheduler: cosine_with_min_lr
#lr_scheduler_kwargs:
#  cosine_min_lr: 1e-6
weight_decay: 0.01
max_grad_norm: 1.0
#warmup_steps: 0
#warmup_ratio: 0.025


# === Data Configuration ===
#chat_template: jinja
#chat_template_jinja: "{%- set default_system_message = \"You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. You obediently fulfill the user's requests.\" %}\n\n{{- bos_token }}\n\n{%- if messages[0]['role'] == 'system' %}\n    {%- if messages[0]['content'] is string %}\n        {%- set system_message = messages[0]['content'] %}\n    {%- else %}\n        {%- set system_message = messages[0]['content'][0]['text'] %}\n    {%- endif %}\n    {%- set loop_messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = default_system_message %}\n    {%- set loop_messages = messages %}\n{%- endif %}\n{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}\n\n{%- for message in loop_messages %}\n    {%- if message['role'] == 'user' %}\n        {%- if message['content'] is string %}\n            {{- '[INST]' + message['content'] + '[/INST]' }}\n        {%- else %}\n            {{- '[INST]' }}\n            {%- for bl (line truncated to 1000 characters)
#chat_template: chatml
special_tokens:
  pad_token: "<pad>"

#tokenizer_use_mistral_common: true
shuffle_merged_datasets: true
datasets:
  - path: ToastyPigeon/steve-and-marvin
    type: completion
    data_files: marvin.json
  - path: ToastyPigeon/kimi-stories-completion
    type: completion
  - path: Alfitaria/bodinforg-completions
    type: completion
dataset_prepared_path: last_run_prepared


# === Plugins ===
plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

# === Hardware Optimization ===
#gradient_checkpointing: offload
#gradient_checkpointing_kwargs:
#  use_reentrant: false
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
#liger_fused_linear_cross_entropy: true
cut_cross_entropy: true

#deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json

# === FSDP Config === 
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: true
  fsdp_activation_checkpointing: true
  fsdp_use_orig_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sharding_strategy: FULL_SHARD
# === Wandb Tracking ===
wandb_project: Nemo
# wandb_entity: [WANDB_ENTITY]
# wandb_name: [WANDB_RUN_NAME]

# === Checkpointing ===
saves_per_epoch: 10
save_total_limit: 1

# === Advanced Settings ===
output_dir: /workspace/aibox-standalone-pool/axolotl/nemo-writer-ckpts-2e
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
save_safetensors: true
logging_steps: 1
gc_steps: 10
seed: 69
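
For readers more familiar with PEFT than with Axolotl, the adapter section of the config above (adapter: qlora, lora_r: 32, lora_alpha: 32, lora_dropout: 0.1, lora_target_linear: true, peft_use_rslora: false) corresponds roughly to the following peft.LoraConfig. This is an illustrative sketch of the equivalent settings, not the exact object Axolotl constructs internally:

# Illustrative only: approximate PEFT equivalent of the LoRA section above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                         # lora_r
    lora_alpha=32,                # lora_alpha
    lora_dropout=0.1,             # lora_dropout
    target_modules="all-linear",  # lora_target_linear: true
    use_rslora=False,             # peft_use_rslora: false
    bias="none",
    task_type="CAUSAL_LM",
)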




nemo-kimi-lora-2e

This model is a LoRA adapter fine-tuned from inflatebot/MN-12B-Mag-Mell-R1 on the ToastyPigeon/steve-and-marvin, ToastyPigeon/kimi-stories-completion, and Alfitaria/bodinforg-completions datasets. It achieves the following results on the evaluation set:

  • Loss: 2.5237

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 69
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • total_eval_batch_size: 2
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 29
  • training_steps: 984
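
The reported total_train_batch_size follows from the config and the device count. A quick check, assuming standard data parallelism across the two GPUs:

# Assumed derivation of the effective batch size per optimizer step.
micro_batch_size = 1
gradient_accumulation_steps = 2
num_devices = 2
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices  # = 4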

Training results

Training Loss   Epoch    Step   Validation Loss
No log          0        0      2.9604
2.6808          0.1016   50     2.7132
3.1442          0.2033   100    2.6132
2.8972          0.3049   150    2.5786
2.4404          0.4065   200    2.5598
2.5215          0.5081   250    2.5512
2.5145          0.6098   300    2.5456
2.5293          0.7114   350    2.5412
2.5439          0.8130   400    2.5380
2.2925          0.9146   450    2.5342
2.4822          1.0163   500    2.5326
2.3820          1.1179   550    2.5299
2.6777          1.2195   600    2.5282
2.5493          1.3211   650    2.5264
2.5682          1.4228   700    2.5257
2.4425          1.5244   750    2.5248
2.5204          1.6260   800    2.5243
2.5435          1.7276   850    2.5239
2.8078          1.8293   900    2.5237
2.8416          1.9309   950    2.5237

Framework versions

  • PEFT 0.15.2
  • Transformers 4.52.4
  • Pytorch 2.7.0+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1
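
Using the adapter

A minimal sketch for loading the adapter for inference, assuming the base model is quantized to 4-bit with bitsandbytes as during training. The repository ids come from this card; the quantization details and generation settings are illustrative:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "inflatebot/MN-12B-Mag-Mell-R1"
adapter_id = "ToastyPigeon/nemo-kimi-lora-2e"

# 4-bit quantization, mirroring the QLoRA training setup (assumed NF4).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

# The adapter was trained on raw-text completion datasets, so a plain text prompt is appropriate.
inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))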