NOTE: This repository is now superseded by https://huggingface.co/bertin-project/bertin-roberta-base-spanish. This model corresponds to the beta version, trained with stepwise oversampling for 200k steps at a sequence length of 128. Version 1 is now available and should be used instead.
# BERTIN
BERTIN is a series of BERT-based models for Spanish. This one is a RoBERTa-large model trained from scratch on the Spanish portion of mC4 using Flax; the training scripts are included in the repository.
This is part of the Flax/JAX Community Week, organised by HuggingFace, with TPU usage sponsored by Google.
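As a minimal usage sketch, the snippet below loads the recommended version 1 checkpoint (`bertin-project/bertin-roberta-base-spanish`, linked in the note above) for masked-token prediction with the `transformers` library. The example sentence is illustrative and not from the original card.

```python
from transformers import pipeline

# Fill-mask pipeline on the recommended v1 checkpoint (see note above).
fill_mask = pipeline(
    "fill-mask",
    model="bertin-project/bertin-roberta-base-spanish",
)

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill_mask("Fui a la librería a comprar un <mask>."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```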
## Spanish mC4
The Spanish portion of mC4 contains about 416 million records and 235 billion words.
```bash
$ zcat c4/multilingual/c4-es*.tfrecord*.json.gz | wc -l
416057992

$ zcat c4/multilingual/c4-es*.tfrecord-*.json.gz | jq -r '.text | split(" ") | length' | paste -s -d+ - | bc
235303687795
```
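For comparison, the corpus can also be inspected without downloading it in full by streaming it through the `datasets` library. This is a hedged sketch, not part of the original card: the `mc4` dataset name, its `es` config, and streaming support are assumptions about the Hugging Face hub at the time, and iterating all ~416M records this way would take a very long time, so the snippet only samples a prefix.

```python
from datasets import load_dataset

# Stream the Spanish portion of mC4 instead of downloading it.
# Dataset name "mc4" and config "es" are assumptions, not from the card.
dataset = load_dataset("mc4", "es", split="train", streaming=True)

# Count records and whitespace-separated words over a small sample only,
# mirroring the zcat/jq pipeline above on a prefix of the corpus.
records = words = 0
for example in dataset.take(1000):
    records += 1
    words += len(example["text"].split())

print(f"records sampled: {records}, words: {words}")
```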
## Team members
- Javier de la Rosa (versae)
- Eduardo González (edugp)
- Paulo Villegas (paulo)
- Pablo González de Prado (Pablogps)
- Manu Romero (mrm8488)
- María Grandury (mariagrandury)
## Useful links