---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---

# BERT BASE (cased)

Pretrained model on Bulgarian language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it does make a difference between bulgarian and Bulgarian.

The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).

The model was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).

### How to use

Here is how to use this model in PyTorch:

```python
>>> from transformers import pipeline
>>>
>>> model = pipeline(
>>>     'fill-mask',
>>>     model='rmihaylov/bert-base-theseus-bg',
>>>     tokenizer='rmihaylov/bert-base-theseus-bg',
>>>     device=0,
>>>     revision=None)
>>>
>>> output = model("Кой ще поеме [MASK] отговорност?")
>>> print(output)
```
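If you prefer to load the model and tokenizer directly rather than through `pipeline`, a minimal sketch using the generic `transformers` Auto classes (assuming the checkpoint exposes a standard masked-language-modeling head) could look like this:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>>
>>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/bert-base-theseus-bg')
>>> model = AutoModelForMaskedLM.from_pretrained('rmihaylov/bert-base-theseus-bg')
>>>
>>> inputs = tokenizer("Кой ще поеме [MASK] отговорност?", return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>>
>>> # Find the [MASK] position and decode the highest-scoring token
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> print(tokenizer.decode(logits[0, mask_index].argmax(dim=-1)))
```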
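As background on the compression step mentioned above, the core idea of progressive module replacing (BERT-of-Theseus) is that, during fine-tuning, each block of the large predecessor model is stochastically swapped for a smaller successor block, with the replacement probability scheduled upward until only the compact model remains. Here is a minimal illustrative sketch of that routing idea, not the actual training code used for this model:

```python
import torch
import torch.nn as nn

class TheseusModule(nn.Module):
    """Illustrative only: routes each forward pass through either the
    original (predecessor) block or its compact successor, sampled
    per batch as in progressive module replacing."""

    def __init__(self, predecessor: nn.Module, successor: nn.Module):
        super().__init__()
        self.predecessor = predecessor
        self.successor = successor
        self.replace_prob = 0.0  # scheduled from ~0 toward 1 during training

    def forward(self, x):
        if self.training:
            # Bernoulli draw decides which block handles this batch
            use_successor = torch.rand(()).item() < self.replace_prob
            return self.successor(x) if use_successor else self.predecessor(x)
        # after compression, inference uses only the successor
        return self.successor(x)
```

In the paper, the predecessor blocks are frozen while the replacement probability increases; once training finishes, the successors are stacked to form the compressed model.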