Model: LLaMA (IFD Top 30%)

πŸ” Purpose

Fine-tune meta-llama/Llama-3.2-1B on the instruction samples with the highest Instruction-Following Difficulty (IFD) scores.
This group contains the samples where the instruction contributes least to predicting the output, i.e., the high-IFD samples.

πŸ“‚ Dataset

  • alpaca2000.csv
    • Top 30% by IFD score (600 of the 2,000 samples)
    • Selection criterion: PPL(y | x) / PPL(y), where x is the instruction plus input and y is the output (see the scoring sketch below)
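
A minimal sketch of this selection step. The column names in alpaca2000.csv, the prompt template joining instruction and input, and the output file name are assumptions, not details confirmed by this card:

```python
# Sketch of the IFD selection criterion: IFD(x, y) = PPL(y | x) / PPL(y).
# Assumptions (not from this card): column names, prompt template, output file name.
import math

import pandas as pd
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()


@torch.no_grad()
def perplexity(prefix: str, target: str) -> float:
    """Perplexity of `target`, conditioned on `prefix` when it is non-empty."""
    if prefix:
        prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
        target_ids = tokenizer(target, return_tensors="pt", add_special_tokens=False).input_ids
        input_ids = torch.cat([prefix_ids, target_ids], dim=1)
        labels = input_ids.clone()
        labels[:, : prefix_ids.shape[1]] = -100  # score only the output tokens
    else:
        input_ids = tokenizer(target, return_tensors="pt").input_ids
        labels = input_ids.clone()
    loss = model(input_ids=input_ids, labels=labels).loss  # mean NLL over scored tokens
    return math.exp(loss.item())


df = pd.read_csv("alpaca2000.csv")
prompts = (df["instruction"].fillna("") + "\n" + df["input"].fillna("")).tolist()
outputs = df["output"].astype(str).tolist()
df["ifd"] = [perplexity(x, y) / perplexity("", y) for x, y in zip(prompts, outputs)]

top30 = df.nlargest(int(len(df) * 0.3), "ifd")  # 600 of the 2,000 samples
top30.to_csv("alpaca2000_ifd_top30.csv", index=False)
```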

βš™οΈ Training Config

  • Model: meta-llama/Llama-3.2-1B
  • Precision: bf16 where supported, otherwise float32
  • Epochs: 3
  • Max length: 2048
  • Output: output/llama_ifd
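
A minimal sketch of this configuration with the Transformers Trainer. The selected-subset file name, the prompt template, and the batch size are assumptions; only the model, precision, epochs, max length, and output directory come from the list above:

```python
# Sketch of the fine-tuning run: bf16 (float32 fallback), 3 epochs, max length 2048,
# checkpoints under output/llama_ifd. Hypothetical: subset file name, prompt
# template, batch size.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical file: the 600 high-IFD rows selected from alpaca2000.csv.
dataset = load_dataset("csv", data_files="alpaca2000_ifd_top30.csv")["train"]


def to_text(example):
    # Simple instruction/input/output template; the exact format is an assumption.
    prompt = example["instruction"] or ""
    if example.get("input"):
        prompt += "\n" + example["input"]
    return {"text": prompt + "\n" + (example["output"] or "")}


def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=2048)


dataset = dataset.map(to_text)
dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="output/llama_ifd",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    bf16=torch.cuda.is_available() and torch.cuda.is_bf16_supported(),
    logging_steps=10,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```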

πŸ§ͺ Goal

Establish a performance baseline on the high-IFD samples before splitting them further by instruction entropy.
