This is a MicroBERT model for Wolof.

  • Its suffix is -mxp, which means that it was pretrained using supervision from masked language modeling, XPOS tagging, and UD dependency parsing.
  • The unlabeled Wolof data was taken from a February 2022 dump of Wolof Wikipedia, totaling 517,237 tokens.
  • The UD treebank UD_Wolof-WDT, v2.9, totaling 44,258 tokens, was used for labeled data.

Please see the repository and the paper for more details.
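The model can be loaded with the Hugging Face `transformers` library. A minimal sketch, assuming the repository id is `lgessler/microbert-wolof-mxp` (inferred from the model name and the `-mxp` suffix described above; adjust if the actual id differs):

```python
# Minimal sketch for loading the model and running masked language modeling.
# NOTE: the repo id below is an assumption, not confirmed by this card.
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_ID = "lgessler/microbert-wolof-mxp"  # assumed repo id

def load_model(model_id: str = MODEL_ID):
    """Download and return the tokenizer and MLM head for the model."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForMaskedLM.from_pretrained(model_id)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    # Fill a masked token in a Wolof sentence (placeholder text).
    inputs = tokenizer(f"Dama {tokenizer.mask_token} ci Dakar.", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.logits.shape)
```

Since the model was pretrained with XPOS tagging and dependency parsing supervision as auxiliary objectives, the published checkpoint is expected to expose only the encoder and MLM head; the tagging and parsing heads are training-time components.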
