# Model: LLaMA (IFD Top 30% + Low Instruction Entropy)

## 🔍 Purpose

Fine-tune `meta-llama/Llama-3.2-1B` on instructions with **low entropy** within the high-IFD group. These instructions tend to be **short, simple, and repetitive**.

## 📂 Dataset

- `alpaca2000_entropy_low.csv`
- Derived from `alpaca2000.csv` (top 30% by IFD)
- Bottom 30% by instruction entropy (~180 samples)

## ⚙️ Training Config

- Model: `meta-llama/Llama-3.2-1B`
- Precision: `bf16` or `float32`
- Epochs: 3
- Max length: 2048
- Output: `output/llama_entropy_low`

## 🧪 Goal

Measure whether **simpler instructions** lead to better model learning by reducing prompt complexity.
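
## 📝 Selection Sketch

A minimal sketch of the low-entropy selection step, assuming Shannon entropy is computed over whitespace-split tokens of each instruction (the exact tokenization and function names here, `instruction_entropy` and `select_low_entropy`, are illustrative and not taken from this repo):

```python
import math
from collections import Counter

def instruction_entropy(text: str) -> float:
    # Shannon entropy over whitespace tokens (assumption: the exact
    # tokenization used to compute entropy is not specified in this doc).
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_low_entropy(rows, frac=0.30):
    # rows: instruction records already filtered to the IFD top 30%.
    # Keep the bottom `frac` fraction by instruction entropy.
    scored = sorted(rows, key=lambda r: instruction_entropy(r[0]))
    k = max(1, int(len(scored) * frac))
    return scored[:k]
```

Repetitive instructions (many repeated tokens) score near zero entropy and are kept first, which matches the "short, simple, and repetitive" characterization above.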