# RomanSetu
This model was trained as part of the paper *RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization*. The codebase used to train and evaluate this model is available at https://github.com/AI4Bharat/romansetu.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
model_path = "ai4bharat/romansetu-cpt-roman-300m"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
```
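
Since this is a causal language model continually pretrained on romanized text, a simple way to try it is greedy text continuation. The sketch below is a minimal, self-contained example; the prompt is an illustrative piece of romanized Hindi, not taken from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "ai4bharat/romansetu-cpt-roman-300m"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
model.eval()

# Illustrative romanized Hindi prompt (any romanized Indic text can be used)
prompt = "bharat ek vishal desh hai"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy continuation of the prompt
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```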