Model Card for lucio94c/ita_to_eng
This model is a fine-tuned neural machine translation (NMT) model for translating text from Italian to English, built with the Hugging Face 🤗 Transformers library.
Model Details
Model Description
This model is based on the pre-trained Helsinki-NLP/opus-mt-it-en MarianMT architecture and has been fine-tuned on additional parallel Italian–English data for improved translation quality on general and informal texts.
- Developed by: Lucio Ciracì
- Model type: Sequence-to-sequence neural translation (MarianMT)
- Language(s) (NLP): Italian → English
- License: MIT (confirm compatibility with the upstream Helsinki-NLP/opus-mt-it-en license before redistribution)
- Finetuned from model: Helsinki-NLP/opus-mt-it-en
Model Sources
- Repository: https://huggingface.co/lucio94c/ita_to_eng
- Paper: Marian: Fast Neural Machine Translation in C++ (Junczys-Dowmunt et al., 2018)
Uses
Direct Use
- Translate Italian text into English for documents, emails, or general understanding.
- Useful for language learners, travelers, or anyone needing fast Italian-to-English translation.
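For quick direct use, the model can also be driven through the 🤗 `pipeline` API instead of loading the model and tokenizer separately. A minimal sketch (assumes network access to download the checkpoint on first run):

```python
from transformers import pipeline

# load the fine-tuned Italian -> English checkpoint as a translation pipeline
translator = pipeline("translation", model="lucio94c/ita_to_eng")

# the pipeline returns a list with one dict per input text
result = translator("Buongiorno, vorrei un caffè.")
print(result[0]["translation_text"])
```

The pipeline handles tokenization, generation, and decoding internally, which is convenient for one-off translations; the lower-level API shown later gives more control over generation parameters.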
Downstream Use
- Can be fine-tuned further for specific domains (e.g., legal, medical, technical).
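Domain adaptation follows the standard seq2seq fine-tuning recipe. A minimal sketch of the data-preparation step, assuming a hypothetical list of in-domain parallel Italian–English pairs (the training loop itself follows the usual 🤗 `Seq2SeqTrainer` pattern and is omitted here):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lucio94c/ita_to_eng")

# hypothetical in-domain parallel data (source, target)
pairs = [
    ("Il contratto è valido.", "The contract is valid."),
    ("La clausola è nulla.", "The clause is void."),
]

# tokenize sources and targets together; text_target routes the targets
# through the target-language side of the tokenizer to build the labels
batch = tokenizer(
    [it for it, _ in pairs],
    text_target=[en for _, en in pairs],
    padding=True,
    truncation=True,
    max_length=128,
    return_tensors="pt",
)
print(batch["input_ids"].shape, batch["labels"].shape)
```

The resulting `input_ids` and `labels` tensors can be fed to a `Seq2SeqTrainer` or a custom training loop.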
Out-of-Scope Use
- Not suitable for translating highly domain-specific or sensitive content without further fine-tuning.
- Not designed for translating code or non-natural language text.
Bias, Risks, and Limitations
- The model may reflect biases present in the training data.
- Quality may decrease on slang, dialects, or specialized terminology.
- Automatic translations can contain errors; always review important text.
Recommendations
- Review translations, especially for professional or sensitive contexts.
- Do not use for critical translations (medical, legal, etc.) without human verification.
How to Get Started with the Model
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForSeq2SeqLM.from_pretrained("lucio94c/ita_to_eng")
tokenizer = AutoTokenizer.from_pretrained("lucio94c/ita_to_eng")

# tokenize the Italian input, generate, and decode the English translation
text = "Ciao, come stai?"
inputs = tokenizer(text, return_tensors="pt")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
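Marian-based checkpoints truncate inputs beyond their maximum sequence length, so long documents should be split into sentence-sized chunks and translated piece by piece. A minimal, self-contained sketch of such a splitter (the regex and character budget are illustrative assumptions, not part of the model):

```python
import re


def split_into_chunks(text: str, max_chars: int = 400) -> list[str]:
    """Split text into sentence-aligned chunks of at most max_chars.

    Naive splitter: assumes sentences end with '.', '!' or '?'.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # start a new chunk when adding the sentence would exceed the budget
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks


text = "Prima frase. Seconda frase! Terza frase?"
print(split_into_chunks(text, max_chars=25))
# → ['Prima frase.', 'Seconda frase!', 'Terza frase?']
```

Each chunk can then be passed through the tokenizer/generate loop shown above and the translations concatenated.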