# FlamingNeuron/llama381binstruct_summarize_short_merged
This is a merged model based on NousResearch/Meta-Llama-3.1-8B-Instruct, fine-tuned using LoRA adapters for legal-domain summarization. The LoRA weights have been merged with the base model for standalone use.
## Task
This model converts legalese into short, human-readable summaries, based on data from the legal_summarization project.
## Example Usage
For complete setup instructions and working inference examples, see:
GitHub Repo: LLaMA3-demo
This model expects Meta-style structured prompts with two fields: `original_text` and `reference_summary`. The `original_text` field contains the input passage, and the model generates a summary in place of the empty `reference_summary` field, as sketched below.
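A minimal inference sketch, assuming the repo id matches the card title and the model is loadable via `transformers`. The exact prompt string below (field labels and layout) is an illustration of the two-field format described above, not a verbatim template from this card; see the linked GitHub repo for the authoritative examples.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, derived from the card title.
model_id = "FlamingNeuron/llama381binstruct_summarize_short_merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Two-field structured prompt: the passage goes in original_text,
# and reference_summary is left empty for the model to fill in.
legal_text = "The Licensee shall indemnify and hold harmless the Licensor ..."
prompt = f"original_text: {legal_text}\nreference_summary:"

# Wrap the prompt in the Llama 3.1 chat template via the tokenizer.
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens (the summary).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```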
## Training Procedure
This model was trained with Supervised Fine-Tuning (SFT) on legal document summaries from the legal_summarization dataset. LoRA adapters were applied during training and merged into the base model afterward using `merge_and_unload()`.
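A sketch of that post-training merge step, assuming the adapters were trained with TRL/PEFT and saved locally (the `./lora-adapter` path is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model the card names.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Meta-Llama-3.1-8B-Instruct", torch_dtype="auto"
)

# Attach the trained LoRA adapter, then fold its weights into the base
# model so it can be used standalone, without PEFT, at inference time.
model = PeftModel.from_pretrained(base, "./lora-adapter")
merged = model.merge_and_unload()
merged.save_pretrained("llama381binstruct_summarize_short_merged")
```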
## Framework Versions
- TRL: 0.16.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
This model was fine-tuned using [TRL](https://github.com/huggingface/trl).
## Legal Notice
This model builds on Meta's LLaMA 3.1 architecture and is governed by the LLaMA 3.1 Community License. All use must comply with Meta's acceptable use policy.
It was fine-tuned using the legal_summarization dataset for research and educational purposes only.
This model is not intended for commercial use beyond the limits described in the Meta license (e.g., services with more than 700M monthly active users).