# roberta-base-tibetan

## Model Description
This is a RoBERTa model pre-trained on Tibetan texts. Training took 40 hours and 44 minutes on a single NVIDIA A100-SXM4-40GB. You can fine-tune roberta-base-tibetan for downstream tasks such as POS-tagging and dependency-parsing, as sketched below.
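For example, a token-classification head (as used for POS-tagging) can be attached when loading the model. This is a minimal sketch, not the model author's own fine-tuning recipe; the tag set below is purely hypothetical and would come from your labeled corpus:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["NOUN", "VERB", "ADP", "PUNCT"]  # hypothetical tag set for illustration

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/roberta-base-tibetan",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# The randomly initialized classification head can then be trained with the
# Trainer API or a standard PyTorch loop on a labeled Tibetan corpus.
```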
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and masked-language model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
```
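Since this is a masked-language model, you can also query it directly through the `fill-mask` pipeline. The sketch below is illustrative: the sample phrase is arbitrary, and prediction quality depends on how the tokenizer segments Tibetan text.

```python
from transformers import pipeline

# Build a fill-mask pipeline on top of the pre-trained model
unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-tibetan")

# Mask the final token of a short Tibetan phrase and print the top predictions
print(unmasker("བོད་ཀྱི་" + unmasker.tokenizer.mask_token))
```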