Model Sources

Model Description

Qwen3-XPlus series models start from Qwen3 instruct models and apply layer-selective tuning using only a small amount of parallel data.

Meanwhile, comprehensive testing on 16 reasoning tasks, including BBEH, LiveCodeBench, and OlymMATH, shows that Qwen3-XPlus surpasses existing translation-enhanced models and performs on par with the Qwen3 instruct models.
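The layer-selective recipe itself is not detailed in this card. The sketch below only illustrates the general idea under the assumption that a chosen subset of decoder layers is unfrozen while the rest of the Qwen3 instruct backbone stays fixed; TRAINABLE_LAYERS is a hypothetical placeholder, not the layer set actually used for Qwen3-XPlus.

# Minimal sketch of layer-selective tuning (assumptions noted above):
# freeze every parameter except those in a chosen subset of decoder layers,
# then fine-tune on parallel data with a standard training loop.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-14B")
TRAINABLE_LAYERS = range(28, 36)  # hypothetical layer indices, for illustration only

for name, param in model.named_parameters():
    # Parameter names look like "model.layers.<idx>.self_attn.q_proj.weight"
    param.requires_grad = any(f"model.layers.{i}." in name for i in TRAINABLE_LAYERS)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable / 1e9:.2f}B")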

🔥 Excellent Translation Performance

Qwen3-XPlus significantly boosts translation performance for both high- and low-resource languages.

(Figure: translation performance comparison in high- and low-resource languages.)

🔥 Excellent Reasoning Performance

(Figure: results on reasoning benchmarks.)

Languages Covered by the Training Data

  • en (English)
  • ar (Arabic)
  • bn (Bengali)
  • cs (Czech)
  • de (German)
  • es (Spanish)
  • fr (French)
  • hu (Hungarian)
  • ja (Japanese)
  • ko (Korean)
  • ru (Russian)
  • sr (Serbian)
  • sw (Swahili)
  • te (Telugu)
  • th (Thai)
  • vi (Vietnamese)
  • zh (Chinese)

Model Index

We provide multiple versions of the Qwen3-XPlus model; the model links are as follows:

Model                          LLaMAX
Qwen3-XPlus-17langs-8B         Link
👉 Qwen3-XPlus-17langs-14B     Link
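A minimal usage sketch with the transformers library is shown below. The chat-template call and the translation prompt wording are illustrative assumptions, not an official recipe from this card.

# Load Qwen3-XPlus-17langs-14B and run a simple translation query.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLaMAX/Qwen3-XPlus-17langs-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative translation prompt; the exact wording is an assumption.
messages = [{"role": "user", "content": "Translate the following English sentence into Chinese:\nThe weather is nice today."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))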

Citation

If our model is helpful for your work, please cite the following paper:

@misc{gaoLLaMAX2YourTranslationEnhanced2025,
  title = {{{LLaMAX2}}: {{Your Translation-Enhanced Model}} Also {{Performs Well}} in {{Reasoning}}},
  shorttitle = {{{LLaMAX2}}},
  author = {Gao, Changjiang and Huang, Zixian and Gong, Jingyang and Huang, Shujian and Li, Lei and Yuan, Fei},
  year = {2025},
  month = oct,
  number = {arXiv:2510.09189},
  eprint = {2510.09189},
  primaryclass = {cs},
  publisher = {arXiv},
  doi = {10.48550/arXiv.2510.09189},
  archiveprefix = {arXiv}
}
