base_model: Heralax/test-model-4-pretrain
tokenizer_type: AutoTokenizer
model_type: AutoModelForCausalLM
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: axolotl_rag_conversations_facts.jsonl
    type: input_output
  - path: axolotl_correction_conversations_facts.json
    type: input_output
  - path: pretraining_subset_2170418.jsonl
    type: completion
  - path: factual_sft_completion/combined_all_0.jsonl
    type: completion
  - path: factual_sft_completion/combined_all_1.jsonl
    type: completion
  - path: generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
    type: completion
  - path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
    type: completion
  - path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
    type: completion
  - path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
    type: completion
  - path: generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
    type: completion
  - path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
    type: completion
dataset_prepared_path: last_finetune_prepared
output_dir: ./finetune-model-output
seed: 1337
sequence_len: 5000
sample_packing: true
pad_to_sequence_len: false
shuffle_merged_datasets: true
gradient_accumulation_steps: 75
micro_batch_size: 2
eval_batch_size: 4
num_epochs: 5
optimizer: paged_adamw_8bit
lr_scheduler: constant
learning_rate: 2.0e-05
noisy_embedding_alpha: 5
weight_decay: 0
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
logging_steps: 1
xformers_attention: false
flash_attention: true
chat_template: chatml
auto_resume_from_checkpoints: false
warmup_ratio: 0.1
evals_per_epoch: 1
val_set_size: 0.04
saves_per_epoch: 1
eval_sample_packing: false
save_total_limit: 2
special_tokens:
  pad_token: <unk>
use_liger_kernel: true
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
sequence_length: 10000
wandb_project: test-project
wandb_entity: ''
wandb_watch: ''
wandb_run_id: ''
wandb_log_model: ''
hub_model_id: Heralax/test-model-4-sft
hub_strategy: all_checkpoints
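For anyone adapting this config: the two dataset `type`s above expect different JSONL row shapes. Here is a minimal sketch of each, based on axolotl's documented `input_output` (explicit loss-masked segments) and `completion` (raw text) formats. The row contents are invented placeholders, not lines from the actual training data:

```python
import json

# One row of an `input_output` dataset: a list of segments, where
# `label: false` text is masked out of the loss (prompt/context) and
# `label: true` text is trained on (response).
input_output_row = {
    "segments": [
        {"label": False, "text": "Human: What does FM 7-0 cover?\n"},
        {"label": True, "text": "FM 7-0 covers Army training. **Finished.**"},
    ]
}

# One row of a `completion` dataset: raw text, trained on end to end.
completion_row = {"text": "FM 7-0 describes how the Army trains units and leaders..."}

with open("example_input_output.jsonl", "w") as f:
    f.write(json.dumps(input_output_row) + "\n")
with open("example_completion.jsonl", "w") as f:
    f.write(json.dumps(completion_row) + "\n")
```

With `sample_packing: true`, rows are packed together up to `sequence_len` (5000 tokens here) rather than each row being padded to full length.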
# llama-Augmentoolkit-Quickstart-Factual-Demo-Example

This model achieves the following results on the evaluation set:
- Loss: 0.6876
(See? Number go down. Augmentoolkit works.)
This is a demo model produced by running through the quickstart of Augmentoolkit's Factual Finetuning pipeline. The model was taught about some of the US Army Field Manuals.
The following manuals were trained on:
- `ARN14613_FM 1-05 FINAL WEB.pdf.txt`
- `ARN15310-FM_3-13.4-000-WEB-2.pdf.txt`
- `ARN17082-FM_3-11-000-WEB-1.pdf.txt`
- `ARN19185_FM 6-02_FINAL_WEB.pdf.txt`
- `ARN19354_FM 6-27 _C1_FINAL_WEB_v2.pdf.txt`
- `ARN19639_FM 3-14 FINAL WEB.pdf.txt`
- `ARN21797_FM_3-04_FINAL_WEB_wfix.pdf.txt`
- `ARN30964-FM_7-22-001-WEB-4.pdf.txt`
- `ARN31339-FM_3-01-000-WEB-1.pdf.txt`
- `ARN31353-FM_3-34-000-WEB-1.pdf.txt`
- `ARN31505-FM_3-96-000-WEB-1.pdf.txt`
- `ARN33094-FM_3-57-000-WEB-1.pdf.txt`
- `ARN33127-FM_3-12-000-WEB-1.pdf.txt`
- `ARN33331-FM_1-0-000-WEB-1.pdf.txt`
- `ARN34192-FM_3-81-000-WEB-1.pdf.txt`
- `ARN34470-FM_6-99-000-WEB-1.pdf.txt`
- `ARN34770-FM_3-94-000-WEB-1.pdf.txt`
- `ARN34864-FM_3-61-000-WEB-1.pdf.txt`
- `ARN35076-FM_7-0-000-WEB-1.pdf.txt`
- `ARN35404-FM_6-0-000-WEB-1.pdf.txt`
- `ARN35577-FM_3-55-000-WEB-0.pdf.txt`
- `ARN35791-FM_4-02-001-WEB-3.pdf.txt`
- `ARN35838-FM_3-01.44-000-WEB-1.pdf.txt`
- `ARN36290-FM_3-0-000-WEB-2.pdf.txt`
- `ARN36735-FM_6-22-000-WEB-1.pdf.txt`
The prompt.txt, template.txt, RAG dataset, and GGUF file are all inside this folder so that people can run this model themselves using Augmentoolkit's chat interface. Just download the things not in the checkpoint-xx/ folders (i.e., skip the model.safetensors files), put them all in a folder, and configure the basic-server or rag-server config to point at the prompt, template, etc. (see the documentation pages for those utility pipelines), and bang: Augmentoolkit will run this model with the correct prompt template and configuration.
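If you'd rather script that download, something like the following should work. This is a sketch using `huggingface_hub`; the `ignore_patterns` are an assumption about how to skip the checkpoint folders and safetensors shards:

```python
from huggingface_hub import snapshot_download

# Pull everything except the training checkpoints and the raw
# safetensors shards; the chat interface only needs the prompt,
# template, RAG dataset, and GGUF file.
local_dir = snapshot_download(
    repo_id="Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example",
    ignore_patterns=["checkpoint-*", "*.safetensors"],
)
print("Downloaded to", local_dir)
```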
Stop sequence == "**Finished.**"
Why did I do it like that? Because the more the SFT text resembles the pretraining text, the more knowledge and capability from pretraining carries over to the SFT. Convention and ChatML be damned, I like better performance.
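If you run the model outside Augmentoolkit's utility pipelines, you need to enforce that stop sequence yourself. A minimal sketch with `transformers` (assumptions: a recent transformers version, which supports `stop_strings` in `generate()`, and a placeholder prompt; real use should format the prompt with the repo's prompt.txt and template.txt):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

STOP = "**Finished.**"
prompt = "What does FM 7-0 cover?"  # placeholder; apply the real template here
inputs = tokenizer(prompt, return_tensors="pt")

# generate() can halt on arbitrary strings when given the tokenizer
# to decode with as it goes.
output = model.generate(
    **inputs,
    max_new_tokens=512,
    stop_strings=[STOP],
    tokenizer=tokenizer,
)
completion = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion.split(STOP)[0].strip())  # drop the stop marker itself
```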
Q: Why the Llama license?
A: The quickstart uses Llama 3 to generate the data for the sake of speed and hardware compatibility. Therefore, the Llama license applies to this demo model.