
Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

en_sw - GGUF

Name                 Quant method   Size
en_sw.Q2_K.gguf      Q2_K           2.96GB
en_sw.IQ3_XS.gguf    IQ3_XS         3.28GB
en_sw.IQ3_S.gguf     IQ3_S          3.43GB
en_sw.Q3_K_S.gguf    Q3_K_S         3.41GB
en_sw.IQ3_M.gguf     IQ3_M          3.52GB
en_sw.Q3_K.gguf      Q3_K           3.74GB
en_sw.Q3_K_M.gguf    Q3_K_M         3.74GB
en_sw.Q3_K_L.gguf    Q3_K_L         4.03GB
en_sw.IQ4_XS.gguf    IQ4_XS         4.18GB
en_sw.Q4_0.gguf      Q4_0           4.34GB
en_sw.IQ4_NL.gguf    IQ4_NL         4.38GB
en_sw.Q4_K_S.gguf    Q4_K_S         4.37GB
en_sw.Q4_K.gguf      Q4_K           4.58GB
en_sw.Q4_K_M.gguf    Q4_K_M         4.58GB
en_sw.Q4_1.gguf      Q4_1           4.78GB
en_sw.Q5_0.gguf      Q5_0           5.21GB
en_sw.Q5_K_S.gguf    Q5_K_S         5.21GB
en_sw.Q5_K.gguf      Q5_K           5.34GB
en_sw.Q5_K_M.gguf    Q5_K_M         5.34GB
en_sw.Q5_1.gguf      Q5_1           5.65GB
en_sw.Q6_K.gguf      Q6_K           6.14GB
en_sw.Q8_0.gguf      Q8_0           7.95GB
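
Each file above is a standalone GGUF checkpoint and can be fetched individually; Q4_K_M is commonly treated as a reasonable size/quality trade-off. Below is a minimal sketch using huggingface_hub and llama-cpp-python. The repo id is a placeholder for this repository, and the translation prompt only assumes the en_sw name denotes an English-Swahili task:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo id -- substitute the repository hosting these files.
model_path = hf_hub_download(
    repo_id="RichardErkhov/en_sw-gguf",
    filename="en_sw.Q4_K_M.gguf",  # 4.58GB per the table above
)

# n_ctx sets the context window; adjust to available memory.
llm = Llama(model_path=model_path, n_ctx=4096)

# The base model is Llama-3.1-8B-Instruct, so the chat API applies its template.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to Swahili: Good morning."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```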

Original model description:

library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: en_sw
  results: []

en_sw

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the generator dataset.
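
If the full-precision fine-tune itself is available, it loads like any transformers causal LM. A minimal sketch, assuming a hypothetical repo id (substitute the actual one) and that the Llama 3.1 chat template is preserved:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/en_sw"  # hypothetical repo id; substitute the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Instruct fine-tunes of Llama 3.1 keep the chat template, so apply it directly.
messages = [{"role": "user", "content": "Translate to Swahili: Good morning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```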

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent configuration is sketched after the list):

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: Adafactor (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 5
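
The card's tags (trl, sft) indicate trl's supervised fine-tuning trainer. A minimal sketch of a matching configuration, assuming trl's SFTConfig/SFTTrainer API and a toy dataset in place of the non-public "generator" data:

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy stand-in for the card's "generator" dataset, which is not public.
train_dataset = Dataset.from_dict({"text": ["Example training text."]})

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
config = SFTConfig(
    output_dir="en_sw",
    learning_rate=5e-05,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=16,  # total_train_batch_size: 1 x 16 = 16
    num_train_epochs=5,
    lr_scheduler_type="linear",
    optim="adafactor",               # Adafactor, no additional arguments
    seed=42,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # the card's base model
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```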

Training results

Framework versions

  • Transformers 4.46.3
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.20.3