
Built with Axolotl

See axolotl config

axolotl version: 0.10.0.dev0

# === Start-up Commands ===
# curl -LsSf https://astral.sh/uv/install.sh | sh
# export PATH="$HOME/.local/bin:$PATH"
# uv venv
# source .venv/bin/activate
# git clone https://github.com/axolotl-ai-cloud/axolotl
# cd axolotl
# uv pip install torch==2.5.1 packaging ninja setuptools ftfy deepspeed huggingface_hub[cli,hf_transfer]
# uv pip install "cut-cross-entropy[transformers] @ git+https://github.com/strangedove/ml-cross-entropy.git@gemma3-multimodal"
# uv pip install apollo-torch
# uv pip install --no-build-isolation -e .[flash-attn]
# uv pip install git+https://github.com/huggingface/transformers.git
# uv pip install git+https://github.com/linkedin/Liger-Kernel.git
# export HF_HUB_ENABLE_HF_TRANSFER=1
# huggingface-cli login --token $hf_key && wandb login $wandb_key

# apt update && apt install -y libopenmpi-dev && curl -LsSf https://astral.sh/uv/install.sh | sh && export PATH="$HOME/.local/bin:$PATH" && git clone https://github.com/axolotl-ai-cloud/axolotl && uv venv && source .venv/bin/activate && cd axolotl && uv pip install torch==2.5.1 packaging ninja mpi4py setuptools ftfy deepspeed huggingface_hub[cli,hf_transfer] && uv pip install apollo-torch && uv pip install "cut-cross-entropy[transformers] @ git+https://github.com/strangedove/ml-cross-entropy.git@qwen3" && uv pip install git+https://github.com/linkedin/Liger-Kernel.git && uv pip install --no-build-isolation -e .[flash-attn] && uv pip install git+https://github.com/huggingface/transformers.git && export HF_HUB_ENABLE_HF_TRANSFER=1 && cd .. && huggingface-cli login --token $hf_key && wandb login $wandb_key

# === Model Configuration ===
base_model: Columbidae/Qwen3-16B-A3B-Base
load_in_8bit: false
load_in_4bit: false

# === HF Configuration === 
hub_model_id: Columbidae/Qwen3-16B-A3B-Tulu-Mini
hub_strategy: "every_save"

# === Training Setup ===
num_epochs: 1
micro_batch_size: 4
gradient_accumulation_steps: 1
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
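# sample_packing concatenates short examples into full 4096-token
# sequences so compute is not wasted on padding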

# === Evaluation ===
val_set_size: 1000
evals_per_epoch: 5
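# 5 evals over the ~1278-step epoch -> one eval roughly every 256 steps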
#eval_steps: 20
#max_steps: 60
#eval_table_size:
eval_max_new_tokens: 256
eval_sample_packing: true
#eval_strategy: "no"

# === LoRA Configuration ===
#adapter: lora
#lora_model_dir:
#lora_r: 32
#lora_alpha: 32
#lora_dropout: 0
#lora_target_linear: 
#lora_fan_in_fan_out:
#lora_target_modules:
#  - gate_proj
#  - down_proj
#  - up_proj
#  - q_proj
#  - v_proj
#  - k_proj
#  - o_proj

#lora_mlp_kernel: true
#lora_qkv_kernel: true
#lora_o_kernel: true

# === Hyperparameter Configuration ===
optimizer: apollo_adamw_layerwise
#optimizer: paged_adamw_8bit
# Apollo-mini configuration:
optim_args: "proj=random,rank=128,scale=128.0,scale_type=tensor,update_proj_gap=100"
# Regular Apollo configuration:
# optim_args: 
optim_target_modules: all_linear
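# proj=random projects the gradient statistics through a random rank-128
# subspace (refreshed every update_proj_gap=100 steps); scale_type=tensor
# keeps a single scale factor per tensor, the cheaper APOLLO-mini setting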
learning_rate: 3e-5
lr_scheduler: cosine
#lr_scheduler: cosine_with_min_lr
#lr_scheduler_kwargs:
#  cosine_min_lr: 1e-6
weight_decay: 0.01
#warmup_steps: 0
warmup_ratio: 0.025
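# 0.025 of the ~1278-step epoch ≈ 31 warmup steps (matches the
# lr_scheduler_warmup_steps reported in the training summary below)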


# === Data Configuration ===
#chat_template: jinja
#chat_template_jinja: "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '\n' + message['content'] | trim + '<end_of_turn>\n' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model\n'}}{% endif %}"
#special_tokens:
#  eos_token: "<end_of_turn>"
chat_template: chatml
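# ChatML renders each turn as:
#   <|im_start|>{role}\n{content}<|im_end|>\n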
shuffle_merged_datasets: true
datasets:
  - path: ToastyPigeon/tulu-mini
    type: chat_template
    
dataset_prepared_path: last_run_prepared


# === Plugins ===
plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

# === Hardware Optimization ===
gradient_checkpointing: offload
gradient_checkpointing_kwargs:
  use_reentrant: false
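# 'offload' moves checkpointed activations to CPU between forward and
# backward, trading transfer time for GPU memory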
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
#liger_fused_linear_cross_entropy: true
#unsloth_cross_entropy_loss: true
cut_cross_entropy: true
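# computes the LM-head loss blockwise without materializing the full
# (tokens x vocab) logits matrix, sharply reducing peak memory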
# Only if using multiple GPUs:
#deepspeed: axolotl/deepspeed_configs/zero2.json

# === Wandb Tracking ===
wandb_project: Qwen3MoE-Apollo
# wandb_entity: [WANDB_ENTITY]
# wandb_name: [WANDB_RUN_NAME]

# === Checkpointing ===
saves_per_epoch: 4
save_total_limit: 1

# === Advanced Settings ===
output_dir: ./ckpts
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
save_safetensors: true
logging_steps: 1
gc_steps: 10
seed: 69

Qwen3-16B-A3B-Tulu-Mini

This model is a fine-tuned version of Columbidae/Qwen3-16B-A3B-Base on the ToastyPigeon/tulu-mini dataset. It achieves the following results on the evaluation set:

  • Loss: 2.5759

Model description

Qwen3-16B-A3B-Tulu-Mini is a supervised fine-tune of Columbidae/Qwen3-16B-A3B-Base, a 16B-parameter Qwen3 mixture-of-experts model (the A3B suffix indicates roughly 3B parameters active per token). It was trained for one epoch on ToastyPigeon/tulu-mini using the ChatML chat template and the layerwise APOLLO optimizer.
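
A minimal inference sketch, assuming the checkpoint loads through the standard transformers causal-LM classes and that the tokenizer ships the ChatML template used in training (untested):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Columbidae/Qwen3-16B-A3B-Tulu-Mini"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Render the conversation with the ChatML template the model was trained on.
messages = [{"role": "user", "content": "Give me a two-sentence summary of mixture-of-experts models."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))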

Intended uses & limitations

More information needed

Training and evaluation data

Training used ToastyPigeon/tulu-mini rendered with the ChatML template and packed into 4096-token sequences; 1,000 examples (val_set_size: 1000) were held out as the evaluation split.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 69
  • optimizer: APOLLO_ADAMW_LAYERWISE with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args: proj=random, rank=128, scale=128.0, scale_type=tensor, update_proj_gap=100
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 31
  • num_epochs: 1.0
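
With micro_batch_size 4 and gradient_accumulation_steps 1, each optimizer step covers 4 packed 4096-token sequences (~16K tokens). One epoch works out to roughly 1,278 steps (step 1024 falls at epoch 0.8013 in the table below), so warmup_ratio 0.025 gives the 31 warmup steps listed above.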

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2445        | 0.0008 | 1    | 3.0398          |
| 0.7152        | 0.2003 | 256  | 2.9816          |
| 1.6035        | 0.4006 | 512  | 2.8261          |
| 0.999         | 0.6009 | 768  | 2.6930          |
| 0.4284        | 0.8013 | 1024 | 2.5759          |

Framework versions

  • Transformers 4.51.3
  • PyTorch 2.5.1+cu124
  • Datasets 3.5.1
  • Tokenizers 0.21.1