Adapter-Tuned NLLB Model for Bidirectional Odia ↔ German Translation
This is an adapter-tuned version of `facebook/nllb-200-distilled-600M`, specialized for bidirectional translation between Odia (`ory_Orya`) and German (`deu_Latn`).
This model was developed as part of a thesis project focused on effective fine-tuning strategies for low-resource language pairs within the journalistic domain. It was fine-tuned on a carefully constructed hybrid dataset, combining a larger set of high-quality, human-validated translations with a smaller set of machine-translated sentences to expand lexical, contextual and grammatical coverage.
Live Demo:
- You can test this model live on its Hugging Face Spaces Gradio App.
Model Details
- Base Model: `facebook/nllb-200-distilled-600M`
- Languages: Odia (or), German (de)
- Fine-tuning Domain: Journalistic text sourced from contemporary Odia newspapers (Dharitri & Sambad).
- Developed by: Abhinandan Samal
- Thesis: Enhancing Contextual Understanding in Low-Resource Languages Using Multilingual Transformers
- University: IU International University of Applied Sciences
- Date: July 4, 2025
Fine-tuning Details
Training and Evaluation Data
The model was fine-tuned on a meticulously prepared parallel corpus. Initially, 3,676 unique parallel line pairs were collected. Each "line" in the corpus was designed to provide contextual information for the model, typically containing 2-3 sentences, although some lines consist of a single sentence.
The data originates from two specific Odia newspapers and encompasses a diverse range of news domains, including National, International, Lifestyle, Sports, Trade, Environmental, Science and Technology, Leisure, Commerce, Metro, State, and Editorial.
The curation process involved distinct quality control steps for each language:
- Odia Corpus Validation: All 3,676 lines on the Odia side of the parallel corpus underwent thorough evaluation and validation by a native Odia speaker (the author), ensuring high linguistic fidelity.
- German Corpus Curation:
  - A high-quality subset of 2,000 German lines (corresponding to 2,000 of the original parallel pairs) was meticulously human-evaluated and corrected by a native German speaker. This segment forms a core, highly accurate dataset.
  - The remaining 1,676 German lines (corresponding to the other original parallel pairs) were generated using Google Translate. These lines were utilized to broaden the model's exposure to a wider range of vocabulary and grammatical structures.
Following this rigorous curation, the corpus was transformed into a final bidirectional training dataset of 7,352 distinct training instances. This was achieved by creating two training examples from each parallel pair, using the task-specific prefixes `translate Odia to German:` and `translate German to Odia:`. The overall size of this dataset was deliberately capped at this level as a practical upper limit dictated by the memory and computational constraints of the available single-GPU training environment (NVIDIA T4 on Google Colab Pro).
Here, you can check the dataset.
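As a rough illustration (not the exact preprocessing script from the thesis), the bidirectional expansion of the parallel pairs can be sketched as follows; the record structure and the field names `odia` and `german` are assumed placeholders:

```python
# Sketch: turn each Odia <-> German pair into two prefixed training instances.
# The column names "odia"/"german" are assumptions, not the thesis schema.
parallel_pairs = [
    {"odia": "ଆଜି ପାଗ ବହୁତ ଭଲ ଅଛି।", "german": "Heute ist das Wetter sehr gut."},
    # ... 3,676 pairs in total
]

training_instances = []
for pair in parallel_pairs:
    # Odia -> German direction
    training_instances.append({
        "input": f"translate Odia to German: {pair['odia']}",
        "target": pair["german"],
    })
    # German -> Odia direction
    training_instances.append({
        "input": f"translate German to Odia: {pair['german']}",
        "target": pair["odia"],
    })

# 3,676 pairs x 2 directions = 7,352 training instances
print(len(training_instances))
```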
Training Procedure
The model was fine-tuned using PyTorch and the Hugging Face `Seq2SeqTrainer`.
Key Hyperparameters:
- Learning Rate: 1e-3
- Number of Epochs: 5
- Effective Batch Size: 16 (`per_device_train_batch_size=8` with `gradient_accumulation_steps=2`)
- Optimizer: adafactor
- Precision: Mixed Precision (`fp16=True`)
- Memory Optimization: `gradient_checkpointing=True`
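A minimal sketch of this setup is shown below. Only the `Seq2SeqTrainingArguments` values reflect the hyperparameters listed above; the LoRA settings (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) and the `train_dataset` variable are illustrative assumptions, not the exact configuration used in the thesis:

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)
from peft import LoraConfig, get_peft_model

base_id = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Attach LoRA adapters to the attention projections (assumed settings).
lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling factor
    lora_dropout=0.05,                     # assumed dropout
    target_modules=["q_proj", "v_proj"],   # assumed target modules
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora_config)

# Trainer arguments matching the hyperparameters reported above.
training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-odia-german-lora",
    learning_rate=1e-3,
    num_train_epochs=5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,   # effective batch size of 16
    optim="adafactor",
    fp16=True,
    gradient_checkpointing=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # tokenized bidirectional dataset, prepared elsewhere
    tokenizer=tokenizer,
)
trainer.train()
```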
Evaluation Results
The fine-tuned model's performance was rigorously evaluated against the original `facebook/nllb-200-distilled-600M` baseline and a fully fine-tuned version of the same model, on a held-out test set composed largely (77%) of human-validated sentence pairs. I report scores across three standard machine translation metrics: BLEU (higher is better), chrF (higher is better), and TER (Translation Edit Rate; lower is better).
Metric | Odia → German (Baseline) | Odia → German (Fully Fine-Tuned) | Odia → German (Adapter-Based Fine-Tuned) | German → Odia (Baseline) | German → Odia (Fully Fine-Tuned) | German → Odia (Adapter-Based Fine-Tuned) |
---|---|---|---|---|---|---|
BLEU | 72.6444 | 65.1630 | 74.6235 | 14.4641 | 21.2164 | 26.3310 |
chrF | 62.2442 | 78.9527 | 82.3278 | 44.5058 | 48.5377 | 45.4740 |
TER | 63.0271 | 39.3919 | 39.3919 | 106.0486 | 77.4971 | 73.4183 |
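For reference, these corpus-level metrics can be reproduced with the `sacrebleu` library; the sketch below uses placeholder `hypotheses` and `references` lists (one string per test line, for a single translation direction):

```python
# Sketch: computing BLEU, chrF, and TER with sacrebleu (pip install sacrebleu).
import sacrebleu

hypotheses = ["Heute ist das Wetter sehr gut."]   # system outputs (placeholder)
references = ["Heute ist das Wetter sehr gut."]   # gold translations (placeholder)

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
ter = sacrebleu.corpus_ter(hypotheses, [references])

print(f"BLEU: {bleu.score:.4f}")   # higher is better
print(f"chrF: {chrf.score:.4f}")   # higher is better
print(f"TER:  {ter.score:.4f}")    # lower is better
```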
Interpretation of Results
The evaluation reveals a nuanced and compelling narrative about the effectiveness of different fine-tuning strategies, with the Adapter-Based (LoRA) Fine-Tuning emerging as the superior overall methodology for this translation task.
German → Odia (Generating the Low-Resource Language): LoRA Excels in Accuracy and Fluency
In the more challenging direction of generating the morphologically rich, low-resource language (Odia), the Adapter-Tuned (LoRA) model delivered the most significant gains and the best overall performance.
It achieved the highest BLEU score of 26.33, a remarkable +11.87 point improvement over the baseline, indicating it was the most effective at learning correct lexical choice and phrase-level mappings.
Furthermore, it also attained the lowest (best) TER score of 73.42, a massive 32.6-point reduction in edit rate, signifying that its output was the most structurally sound and required the least human editing effort.
The Fully Fine-Tuned model, while also a strong performer, did not match the peak BLEU or TER scores of the more efficient LoRA method, though it did achieve the highest chrF score, suggesting a slight edge in morphological precision.
Odia → German (Generating the High-Resource Language): LoRA is the Decisive Winner
For the task of translating into the high-resource language (German), the Adapter-Tuned (LoRA) model was the undisputed top performer, outclassing both the powerful baseline and the fully fine-tuned model across all key metrics.
The LoRA model not only surpassed the strong baseline's BLEU score but achieved the highest score overall at 74.62.
It recorded an outstanding chrF score of 82.33, a massive +20.1 point improvement over the baseline, demonstrating its exceptional ability to handle both phrase-level translation and character-level tasks like transliterating named entities.
Crucially, the LoRA model also produced the most fluent output, matching the Fully Fine-Tuned model with the best-in-class TER score of 39.39.
In summary, the parameter-efficient LoRA methodology proved to be the most effective overall strategy, delivering the best performance in the majority of metrics across both translation directions.
How to Use
The easiest way to use this model is with the `translation` pipeline from the `transformers` library. The model was trained to be bidirectional, and you can control the translation direction by specifying `src_lang` and `tgt_lang` in the call.
```python
from transformers import pipeline

# Load the translation pipeline with the fine-tuned model
model_id = "abhinandansamal/nllb-200-distilled-600M-LoRA-finetuned-odia-german-bidirectional"
translator = pipeline("translation", model=model_id, device_map="auto")

# --- Example 1: Translate Odia to German ---
odia_text = "ଆଜି ପାଗ ବହୁତ ଭଲ ଅଛି।"
german_translation = translator(
    odia_text,
    src_lang="ory_Orya",
    tgt_lang="deu_Latn"
)
print(f"Odia Input: {odia_text}")
print(f"German Output: {german_translation[0]['translation_text']}")
# Expected Output: Heute ist das Wetter sehr gut.

# --- Example 2: Translate German to Odia ---
german_text = "Wie ist deine Gesundheit?"
odia_translation = translator(
    german_text,
    src_lang="deu_Latn",
    tgt_lang="ory_Orya"
)
print(f"\nGerman Input: {german_text}")
print(f"Odia Output: {odia_translation[0]['translation_text']}")
# Expected Output: ତୁମର ସ୍ବାସ୍ଥ୍ୟ ଅବସ୍ଥା କ'ଣ?
```
Note: While the model was trained with task prefixes (`translate Odia to German:`), using the `translation` pipeline with the `src_lang` and `tgt_lang` arguments is the cleaner, recommended method for inference, as it abstracts this detail away.
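If you prefer to work below the pipeline abstraction, the standard NLLB generation pattern also applies; the sketch below is illustrative and assumes the checkpoint loads directly with the adapter weights, as in the pipeline example above:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "abhinandansamal/nllb-200-distilled-600M-LoRA-finetuned-odia-german-bidirectional"
# Set the source language on the tokenizer (Odia -> German in this example).
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="ory_Orya")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("ଆଜି ପାଗ ବହୁତ ଭଲ ଅଛି।", return_tensors="pt")
# Force the decoder to start with the target-language token (German).
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("deu_Latn"),
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```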
Intended Use
This model is primarily intended for translating journalistic text between Odia and German. Given its training on articles from various news domains (e.g., National, International, Lifestyle, Sports, Science and Technology), it is suitable for academic research, cross-lingual information retrieval from news sources, and as a supportive tool for language learners focusing on news-related content in this specific language pair.
Limitations & Bias
- Domain Specificity: While the training data spans various news domains, the model is not optimized for vastly different fields such as legal, medical, literary, or informal conversational text. Its performance is expected to be significantly lower on content outside the journalistic domain.
- Data-Inherited Bias: The model inherits stylistic and topical biases from its training data sources. Despite covering multiple news domains, the primary sources are two specific Odia newspapers. Furthermore, the inclusion of Google Translate-generated German lines in a portion of the training data may introduce or reinforce specific stylistic patterns inherent to machine translation outputs.
Achievements with Current Data Constraints
Despite the constraints in computational resources (single-GPU training on an NVIDIA T4) and the specialized dataset size (7,352 bidirectional instances), this research has achieved significant positive outcomes, demonstrating the viability of adapting large models for low-resource pairs.
- Decisive Performance Gains: Both fine-tuning methodologies yielded substantial improvements over the zero-shot baseline. The Adapter-Tuned (LoRA) model emerged as the top performer for the Odia → German direction, achieving the best BLEU score in this evaluation (74.62) and a chrF of 82.33. For the more challenging German → Odia task, the LoRA model also delivered the best accuracy and fluency, achieving the highest BLEU score of 26.33 and the lowest TER score of 73.42.
- Demonstrated Practical Viability: The successful fine-tuning and subsequent deployment of two functional web applications prove that it is practically feasible to create high-quality, specialized translation tools for low-resource languages. The results show that even with a limited, hybrid-quality corpus, significant improvements in accuracy, fluency, and character-level fidelity can be achieved, with the parameter-efficient LoRA method proving to be a particularly effective and compelling strategy.
Areas for Future Improvement
To further enhance the model's performance, generalizability, and address existing limitations, the following factors are key considerations for future development:
- Expanded High-Quality Data: Increasing the size and diversity of the human-validated parallel corpus, particularly from domains beyond journalism, would be crucial for improving robustness and reducing reliance on machine-translated data.
- Refined German Corpus Curation: Exploring strategies to further reduce the dependency on machine-translated content for the German side, potentially through more extensive human validation or alternative data acquisition methods.
- Addressing Directional Nuances: Further investigation into the specific performance characteristics of each translation direction (e.g., the BLEU score behavior in Odia → German) could lead to targeted optimizations for balanced bidirectional performance.
- Advanced Data Augmentation: Exploring more sophisticated data augmentation techniques could effectively expand the training data's diversity without necessarily requiring more manual collection.
- Model Architecture & Hyperparameter Optimization: Continued experimentation with different model architectures, fine-tuning strategies, and hyperparameter configurations could yield additional performance gains.
- Bias Mitigation: Proactive strategies to identify and mitigate potential biases inherited from the training data sources could improve fairness and broader applicability.
Citation
If you use this model or the associated methodology in your research, please cite the following thesis:
```bibtex
@mastersthesis{SamalThesis2025,
  author = {Abhinandan Samal},
  title  = {Enhancing Contextual Understanding in Low-Resource Languages Using Multilingual Transformers},
  school = {IU International University of Applied Sciences},
  year   = {2025}
}
```