# XNLI Base Model
This model was fine-tuned on the XNLI dataset, with the training subset chosen by random data selection (a training sketch follows the parameter list below).
## Training Parameters
- Dataset: XNLI
- Mode: Base
- Selection Method: Random
- Train Size: 2400 examples
- Epochs: 8
- Batch Size: 16
- Effective Batch Size: 64 (batch_size * gradient_accumulation_steps)
- Learning Rate: 1e-05
- Patience: 6
- Max Length: 256
- Gradient Accumulation Steps: 4
- Warmup Ratio: 0.1
- Weight Decay: 0.01
- Optimizer: AdamW
- Scheduler: cosine_with_warmup
- Random Seed: 42
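For reference, a minimal fine-tuning sketch showing how these hyperparameters could map onto a standard Hugging Face `Trainer` setup. This is an assumed reconstruction, not the published training script: the `"en"` train configuration, the output path, and the random-subsampling step are illustrative choices.

```python
# Minimal fine-tuning sketch, assuming a standard Transformers Trainer setup;
# the exact training script was not published with this card.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=3,  # XNLI: entailment / neutral / contradiction
)

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=256)

# Random selection of 2,400 training examples with the card's seed.
# Assumption: examples were drawn from the English train config.
train = (load_dataset("xnli", "en", split="train")
         .shuffle(seed=42).select(range(2400)).map(tokenize, batched=True))
val = load_dataset("xnli", "en", split="validation").map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xnli-base-random",  # hypothetical output path
    num_train_epochs=8,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,  # effective batch size 16 * 4 = 64
    learning_rate=1e-5,
    warmup_ratio=0.1,
    weight_decay=0.01,
    lr_scheduler_type="cosine",     # cosine schedule with warmup
    eval_strategy="epoch",          # per-epoch eval enables early stopping
    save_strategy="epoch",
    load_best_model_at_end=True,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train,
    eval_dataset=val,
    tokenizer=tokenizer,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=6)],  # patience 6
)
trainer.train()
```

Note that `Trainer` uses AdamW by default, matching the optimizer listed above.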
## Performance
- Overall Accuracy: 65.47% (mean over the six evaluation languages)
- Overall Loss: 0.0141
### Language-Specific Performance
- English (EN): 72.22%
- German (DE): 67.60%
- Arabic (AR): 63.21%
- Spanish (ES): 68.72%
- Hindi (HI): 62.04%
- Swahili (SW): 59.00%
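The per-language numbers above can be reproduced by scoring the checkpoint on each XNLI test configuration. A rough sketch, assuming the checkpoint path from the training sketch (a placeholder, not the actual repository name):

```python
# Per-language accuracy sketch: score the fine-tuned checkpoint on each
# XNLI test split. The checkpoint path is a placeholder.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_DIR = "xnli-base-random"  # hypothetical path to the checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR).eval()

for lang in ["en", "de", "ar", "es", "hi", "sw"]:
    test = load_dataset("xnli", lang, split="test")
    correct = 0
    for batch in test.iter(batch_size=32):
        enc = tokenizer(batch["premise"], batch["hypothesis"],
                        truncation=True, max_length=256,
                        padding=True, return_tensors="pt")
        with torch.no_grad():
            preds = model(**enc).logits.argmax(dim=-1)
        correct += (preds == torch.tensor(batch["label"])).sum().item()
    print(f"{lang}: {correct / len(test):.2%}")
```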
## Model Information
- Base Model: bert-base-multilingual-cased
- Task: Natural Language Inference
- Languages: 6 (EN, DE, AR, ES, HI, SW)
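## Usage

A quick-start inference example. The model identifier is a placeholder; substitute the actual repository or checkpoint name:

```python
# Single-pair NLI inference sketch; MODEL_ID is a placeholder for the
# actual repository or checkpoint name.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "xnli-base-random"  # placeholder identifier
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID).eval()

premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."
enc = tokenizer(premise, hypothesis, truncation=True,
                max_length=256, return_tensors="pt")
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1).squeeze()

# XNLI label order: 0 = entailment, 1 = neutral, 2 = contradiction
for label, p in zip(["entailment", "neutral", "contradiction"], probs):
    print(f"{label}: {p:.3f}")
```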