LiLT + XLM-RoBERTa-base
This model combines the Language-Independent Layout Transformer (LiLT) with XLM-RoBERTa-base, a multilingual RoBERTa model trained on 100 languages.
The result is a LayoutLM-like document understanding model that works across those 100 languages :)
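As a minimal sketch of how the combined model is used: LiLT processes token ids together with one bounding box per token (coordinates normalized to 0–1000), so its forward pass takes both `input_ids` and `bbox`. The snippet below builds a tiny, randomly initialized LiLT from a config so it runs without downloading weights; in practice you would load the pretrained checkpoint from the Hub instead (the repo id in the comment is an assumption).

```python
import torch
from transformers import LiltConfig, LiltModel

# Tiny randomly initialized LiLT for illustration only (no checkpoint download).
# For real use, load the pretrained weights instead, e.g.:
#   model = LiltModel.from_pretrained("nielsr/lilt-xlm-roberta-base")  # assumed repo id
config = LiltConfig(
    vocab_size=1000,        # toy values; the real model uses XLM-RoBERTa-base sizes
    hidden_size=48,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=96,
)
model = LiltModel(config)

input_ids = torch.tensor([[1, 5, 7, 2]])           # token ids
bbox = torch.tensor([[[0, 0, 0, 0],                # one 0-1000 box per token
                      [100, 120, 300, 140],
                      [310, 120, 500, 140],
                      [0, 0, 0, 0]]])

outputs = model(input_ids=input_ids, bbox=bbox)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 4, 48])
```

Because LiLT keeps the text and layout streams separate, the same layout weights can be paired with any RoBERTa-style multilingual encoder, which is exactly what this checkpoint does with XLM-RoBERTa-base.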