# Model Card for qwen2.5-0.5B-Instruct-pruned-distill-Inshort
`SFT(model=qwen2.5-0.5B-Instruct-pruned-Inshort, mode=behaviour cloning) = qwen2.5-0.5B-Instruct-pruned-distill-Inshort`
## Model Details
### Model Description
This model is a fine-tuned version of qwen2.5-0.5B-Instruct-pruned-Inshort, trained on the Inshorts-english dataset.
#### NOTE
This model is part of my project, where I explore pruning a capable teacher model and recovering its performance through distillation (specifically, behavior cloning) and supervised fine-tuning (SFT), focused on an Inshorts-style summarization task.
This model acts as the distilled model in that pipeline.
## Training Procedure
- All `Qwen2DecoderLayer` modules are trainable; the rest of the model is frozen (see the sketch after this list).
- Supervised fine-tuning (SFT) is used as the training method.
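A minimal sketch of the freezing step, assuming a standard `transformers` checkpoint layout; the checkpoint id here mirrors the model name on this card and may differ from the actual repo id:

```python
# Sketch: freeze everything except the Qwen2DecoderLayer blocks before SFT.
import torch
from transformers import AutoModelForCausalLM
from transformers.models.qwen2.modeling_qwen2 import Qwen2DecoderLayer

# Hypothetical checkpoint id for the pruned student model.
model = AutoModelForCausalLM.from_pretrained(
    "qwen2.5-0.5B-Instruct-pruned-Inshort", torch_dtype=torch.bfloat16
)

# Freeze every parameter, then re-enable gradients only inside decoder layers.
for param in model.parameters():
    param.requires_grad = False
for module in model.modules():
    if isinstance(module, Qwen2DecoderLayer):
        for param in module.parameters():
            param.requires_grad = True
```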
### Training Hyperparameters
- Batch Size = 8, Gradient Accumulation Steps = 1
- Warmup Steps = 50
- Epochs = 1
- Optimizer = adamw_8bit
- Learning Rate = 5e-5
- LR Scheduler Type = linear
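These settings map naturally onto TRL's `SFTTrainer`; the following is a sketch under that assumption, with the dataset id and split hypothetical rather than taken from the original training script:

```python
# Sketch: the hyperparameters above expressed as a TRL SFT configuration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

config = SFTConfig(
    output_dir="qwen2.5-0.5B-Instruct-pruned-distill-Inshort",
    per_device_train_batch_size=8,   # Batch Size = 8
    gradient_accumulation_steps=1,
    warmup_steps=50,
    num_train_epochs=1,
    optim="adamw_8bit",              # 8-bit AdamW via bitsandbytes
    learning_rate=5e-5,
    lr_scheduler_type="linear",
)

model = AutoModelForCausalLM.from_pretrained("qwen2.5-0.5B-Instruct-pruned-Inshort")
train_dataset = load_dataset("Inshorts-english", split="train")  # hypothetical id

trainer = SFTTrainer(model=model, args=config, train_dataset=train_dataset)
trainer.train()
```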
## Evaluation
The initial evaluation began with ROUGE; however, this approach was quickly abandoned because ROUGE fails to capture semantic meaning and contextual understanding, both of which are crucial for evaluating abstractive summarization.
As a result, a custom evaluation pipeline was adopted. It uses an LLM-as-a-judge to assess summary quality, assigning an accuracy score on a scale from 1 to 5. Side-by-side human evaluation on a few selected data points was also performed.
Check out the Colab Notebook for the code of the custom evaluation pipeline.
### LLM-as-a-judge details
- model = Qwen/Qwen2.5-32B-Instruct
- sampling technique = greedy sampling
- prompt =

````python
system_prompt_for_accuracy = '''YOU ARE A HIGHLY RELIABLE NEWS HEADLINE EVALUATION JUDGE, TRAINED TO ASSESS PREDICTED HEADLINES BASED SOLELY ON THEIR ACCURACY AND FAITHFULNESS TO THE ORIGINAL NEWS CONTENT. YOUR PRIMARY OBJECTIVE IS TO ENSURE THAT THE PREDICTED HEADLINES ARE:
1. **NOT MISLEADING OR HALLUCINATED**: The predicted headline must accurately reflect the original news content without adding false information or exaggerating details.
2. **FAITHFUL TO THE ORIGINAL NEWS CONTENT**: The headline should summarize the essence of the news while maintaining neutrality and factual correctness.
### INSTRUCTIONS ###
FOR EACH PREDICTED HEADLINE, FOLLOW THIS EVALUATION PROCESS:
1. **UNDERSTAND THE INPUTS:**
- ORIGINAL_NEWS_CONTENT: The full news article that serves as the source.
- PREDICTED_HEADLINE: The generated headline to be evaluated.
2. **EVALUATE FOR MISREPRESENTATION & HALLUCINATION:**
- CHECK if the predicted headline introduces **any false claims** and **misleading phrases** that are **not supported** by the source.
- RATE on a scale of 1-5:
   - (1) **Severely Misleading** – The headline contains major inaccuracies, false claims, or is entirely unrelated to the news content.
   - (2) **Largely Inaccurate** – The headline distorts key facts, introduces misleading implications, or exaggerates information.
   - (3) **Partially Accurate** – The headline is mostly correct but includes minor distortions or slightly misleading phrasing.
   - (4) **Mostly Accurate** – The headline aligns well with the source but may have slight nuances or wording that could be improved.
   - (5) **Fully Accurate** – The headline is entirely faithful to the source, correctly summarizing key details with no factual distortions.
### WHAT NOT TO DO ###
- NEVER ACCEPT A HEADLINE THAT IS FACTUALLY INCORRECT OR MISLEADING.
- NEVER IGNORE SUBTLE DIFFERENCES IN MEANING THAT COULD CHANGE THE FACTUAL ACCURACY.
### OUTPUT FORMAT ###
Your evaluation should be structured as follows:
```json
{
"predicted_headline": "...",
"score": "X/5",
"feedback": "..."
}
```'''

user_prompt_for_accuracy = '''News Content: {content}
Predicted Headline: {predicted_headline}
'''
````
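For reference, here is a minimal sketch of how the judge might be invoked with greedy decoding, reusing the two prompt strings above. The model loading, generation settings, and JSON parsing are assumptions; the actual pipeline lives in the Colab notebook:

```python
# Sketch: score one predicted headline with the LLM judge (greedy decoding).
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

judge_id = "Qwen/Qwen2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(judge_id)
judge = AutoModelForCausalLM.from_pretrained(
    judge_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def score_headline(content: str, predicted_headline: str) -> dict:
    messages = [
        {"role": "system", "content": system_prompt_for_accuracy},
        {"role": "user", "content": user_prompt_for_accuracy.format(
            content=content, predicted_headline=predicted_headline)},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(judge.device)
    # Greedy sampling: do_sample=False picks the argmax token at each step.
    outputs = judge.generate(inputs, max_new_tokens=256, do_sample=False)
    reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    # The prompt requests a JSON object; strip an optional ```json fence first.
    reply = reply.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(reply)
```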
## Results

### Accuracy Score [main evaluation criterion]

| Metric | Value |
|---|---|
| Accuracy Score | 3.1466 |
### ROUGE Score

| Metric | Score |
|---|---|
| ROUGE-1 | 0.3622 |
| ROUGE-2 | 0.1546 |
| ROUGE-L | 0.3207 |
| ROUGE-Lsum | 0.3205 |
### Accuracy-Aware ROUGE Score

| Metric | Score |
|---|---|
| ROUGE-1 | 0.2279 |
| ROUGE-2 | 0.0973 |
| ROUGE-L | 0.2018 |
| ROUGE-Lsum | 0.2017 |
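The accuracy-aware numbers appear to be the plain ROUGE scores scaled by the normalized judge accuracy (accuracy / 5); this is an inference from the tables rather than a documented formula, but a quick check reproduces them:

```python
# Sanity check: accuracy-aware ROUGE == ROUGE * (accuracy / 5)?
# (Assumed relationship, inferred from the two tables above.)
accuracy = 3.1466
rouge = {"rouge1": 0.3622, "rouge2": 0.1546, "rougeL": 0.3207, "rougeLsum": 0.3205}

accuracy_aware = {k: round(v * accuracy / 5, 4) for k, v in rouge.items()}
print(accuracy_aware)
# -> {'rouge1': 0.2279, 'rouge2': 0.0973, 'rougeL': 0.2018, 'rougeLsum': 0.2017}
```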
## GitHub Repository
## All Models