Update size advantage: 535% improvement over Gemma 3 4B (was 373%)
README.md CHANGED

@@ -39,7 +39,7 @@ A fine-tuned Gemma 3 1B instruction model specialized for **English-to-Luganda t
 ### Key Performance Insights
 
 🎯 **Efficiency Leader**: Achieves 6.99 BLEU per billion parameters (highest efficiency ratio)
-**Size Advantage**: Outperforms Gemma 3 4B (4x larger) by 373% on BLEU score
+**Size Advantage**: Outperforms Gemma 3 4B (4x larger) by 535% on BLEU score
 **Competitive Quality**: Achieves similar performance to GPT-5 Mini with known 1B parameter count
 ⚡ **Practical Deployment**: Runs efficiently on consumer hardware while maintaining quality
 
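For readers checking the arithmetic, here is a minimal sketch of how the efficiency ratio and the updated percentage could relate. It assumes "535% on BLEU score" means relative improvement over the Gemma 3 4B baseline and that the 1B model's absolute BLEU is 6.99 (implied by the stated 6.99 BLEU per billion parameters at 1B parameters); the baseline it prints is back-derived for illustration only, not a reported score.

```python
# Illustrative sketch: relates the README's efficiency ratio and percentage claim.
# The Gemma 3 4B baseline below is back-derived from the assumptions above, not reported.

def bleu_per_billion_params(bleu: float, params_billions: float) -> float:
    """Efficiency ratio as described: BLEU divided by parameter count in billions."""
    return bleu / params_billions

def relative_improvement_pct(model_bleu: float, baseline_bleu: float) -> float:
    """Percentage by which model_bleu exceeds baseline_bleu."""
    return (model_bleu - baseline_bleu) / baseline_bleu * 100

model_bleu = 6.99           # assumed absolute BLEU of the 1B model (from the efficiency ratio)
stated_improvement = 535.0  # % over Gemma 3 4B, from the updated README line

# Hypothetical baseline that would yield exactly the stated improvement.
implied_baseline = model_bleu / (1 + stated_improvement / 100)

print(f"{bleu_per_billion_params(model_bleu, 1.0):.2f} BLEU per billion parameters")
print(f"Implied Gemma 3 4B BLEU (illustrative): {implied_baseline:.2f}")
print(f"Check: {relative_improvement_pct(model_bleu, implied_baseline):.0f}% improvement")
```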