
Built with Axolotl

See axolotl config

axolotl version: 0.4.0

base_model: Alignment-Lab-AI/Alignment-Lab-AIlonger
load_in_8bit: false
load_in_4bit: false
strict: false
tokenizer_type: LlamaTokenizer

datasets:
  - path: PygmalionAI/spice
    type: sharegpt
    conversation: chatml

  - path: PygmalionAI/NYROS
    type: sharegpt
    conversation: chatml

chat_template: chatml

dataset_prepared_path: /workspace/disk2/2prepath2
val_set_size: 0.05
output_dir: /workspace/disk2/Eros2-b
eval_sample_packing: true
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
torch_compile: true
hf_use_auth_token: true
hub_strategy: all_checkpoints
hub_model_id: PygmalionAI/Eros-BETA
hub_private_repo: true
push_to_hub: true
wandb_project: Erosium-b
wandb_entity:
wandb_watch: all
overwrite_output_dir: true
wandb_name:
wandb_log_model:
save_safetensors: true
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
amsgrad: true
max_grad_norm: 1
lr_scheduler: 'cosine'
lr_scheduler_kwargs:
  num_cycles: 6
learning_rate: 0.00005
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
train_on_inputs: false
group_by_length: true
neftune_noise_alpha: 6
bf16: auto
fp16:
tf32: false
seed: 314159
early_stopping_patience:
local_rank:
logging_steps: 1
log_level: debug
xformers_attention:
flash_attention: true
warmup_steps:
eval_per_epoch: 0.25
save_steps: 0.20
debug:
deepspeed: ./deepspeed_configs/zero2.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
tokens:
  - "<|im_start|>"
  - "<|im_end|>"

Eros-BETA

This model is a fine-tuned version of Alignment-Lab-AI/Alignment-Lab-AIlonger on the PygmalionAI/spice and PygmalionAI/NYROS datasets. It achieves the following results on the evaluation set:

  • Loss: 1.1394
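As a usage sketch (not part of the original card), the model can be loaded with the transformers library as shown below; the repository id and generation settings are assumptions rather than documented values.

# Hedged inference sketch; repo id and sampling settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tavtav/eros-7B-BETA"  # assumed; the config above pushes checkpoints to PygmalionAI/Eros-BETA
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Assumes the saved tokenizer carries a ChatML chat_template; otherwise build
# the prompt with the manual ChatML layout shown earlier in the card.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))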

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

The model was trained on conversations from PygmalionAI/spice and PygmalionAI/NYROS in ShareGPT format using the ChatML conversation template; 5% of the prepared data (val_set_size: 0.05) was held out for evaluation.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 314159
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • total_eval_batch_size: 8
  • optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9,0.999), epsilon=1e-08, and amsgrad enabled
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 3
  • num_epochs: 4
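The combined batch sizes follow directly from the config and device count:

  total_train_batch_size = micro_batch_size × gradient_accumulation_steps × num_devices = 1 × 4 × 8 = 32
  total_eval_batch_size  = eval_batch_size × num_devices = 1 × 8 = 8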

Training results

Training Loss | Epoch | Step | Validation Loss
1.267         | 1.02  | 224  | 1.3057
1.1657        | 2.02  | 448  | 1.2184
1.062         | 3.02  | 672  | 1.1664
0.8812        | 3.94  | 880  | 1.1394

Framework versions

  • Transformers 4.39.0.dev0
  • PyTorch 2.1.2+cu118
  • Datasets 2.18.0
  • Tokenizers 0.15.0