Welcome to the xGen-small family!

xGen-small (blog, arXiv) is an enterprise-ready compact language model that combines domain-focused data curation, scalable pre-training, length extension, and RL fine-tuning to deliver long-context performance at predictable, low cost. This model release is for research purposes only.

## Model Series

xGen-small comes in two sizes (4B and 9B) with two variants (pre-trained and post-trained):

| Model | # Total Params | Context Length | Variant | Download |
|---|---|---|---|---|
| Salesforce/xgen-small-4B-base-r | 4B | 128k | Pre-trained | [🤗 Link](https://huggingface.co/Salesforce/xgen-small-4B-base-r) |
| Salesforce/xgen-small-4B-instruct-r | 4B | 128k | Post-trained | [🤗 Link](https://huggingface.co/Salesforce/xgen-small-4B-instruct-r) |
| Salesforce/xgen-small-9B-base-r | 9B | 128k | Pre-trained | [🤗 Link](https://huggingface.co/Salesforce/xgen-small-9B-base-r) |
| Salesforce/xgen-small-9B-instruct-r | 9B | 128k | Post-trained | [🤗 Link](https://huggingface.co/Salesforce/xgen-small-9B-instruct-r) |
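
Any of the four checkpoints can be pre-fetched with `huggingface_hub`, using the repo IDs above; a minimal sketch (the instruct repo is chosen only as an example):

```python
# Minimal sketch: cache a checkpoint locally with huggingface_hub.
# Any of the four repo IDs in the table above works the same way.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Salesforce/xgen-small-4B-instruct-r")
print(local_dir)  # path to the cached weights, reusable by from_pretrained
```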

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model, placing the model on GPU when available.
model_name = "Salesforce/xgen-small-4B-base-r"
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
).to(device)

# Tokenize a plain-text prompt (this is a base model, so no chat template).
prompt = "What is Salesforce?"
inputs = tokenizer(
    prompt,
    return_tensors="pt",
    padding=False,
    truncation=True,
).to(device)

# Generate a short continuation and decode it back to text.
generated = model.generate(**inputs, max_new_tokens=32)
output = tokenizer.decode(
    generated[0],
    skip_special_tokens=True,
)
print(output)
```
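
The snippet above targets the base checkpoints. For the instruct variants, the usual `transformers` pattern is to format the conversation with the tokenizer's chat template; a sketch, assuming the instruct tokenizers ship a chat template as most post-trained Hugging Face models do:

```python
# Sketch for the post-trained (instruct) checkpoints, assuming a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/xgen-small-4B-instruct-r"
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto").to(device)

messages = [{"role": "user", "content": "What is Salesforce?"}]
# apply_chat_template inserts the model's chat special tokens for us.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(device)

generated = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(generated[0][inputs.shape[-1]:], skip_special_tokens=True))
```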

## Evaluation

Benchmark results for the xGen-small 4B base model against open models of comparable size (higher is better):

| Category | Task | Llama 3.2-3B | Gemma 3-4B | Qwen2.5-3B | xGen-small 4B Base |
|---|---|---|---|---|---|
| General Knowledge & Reasoning | ARC-Challenge | 50.7 | 57.9 | 55.5 | 57.3 |
| General Knowledge & Reasoning | Big-Bench Hard | 39.1 | 40.6 | 46.1 | 50.9 |
| General Knowledge & Reasoning | HellaSwag | 76.3 | 77.6 | 74.6 | 78.5 |
| General Knowledge & Reasoning | MMLU | 56.1 | 59.5 | 65.6 | 62.6 |
| General Knowledge & Reasoning | MMLU-Pro | 25.1 | 28.1 | 32.0 | 31.8 |
| General Knowledge & Reasoning | TruthfulQA | 39.3 | 39.8 | 48.9 | 42.8 |
| General Knowledge & Reasoning | WinoGrande | 71.6 | 72.3 | 70.0 | 72.7 |
| Math & Science | GPQA | 28.3 | 29.3 | 29.3 | 29.4 |
| Math & Science | GSM8K | 28.0 | 40.6 | 59.1 | 71.9 |
| Math & Science | MATH | 9.0 | 25.2 | 40.7 | 43.1 |
| Coding | HumanEval | 28.1 | 34.6 | 38.3 | 42.5 |
| Coding | HumanEval+ | 25.6 | 28.3 | 33.9 | 37.3 |
| Coding | MBPP | 33.2 | 42.2 | 42.0 | 45.0 |
| Coding | MBPP+ | 38.5 | 52.4 | 52.0 | 54.0 |
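
The card does not specify the exact evaluation setup. Scores on benchmarks like these are commonly reproduced with EleutherAI's lm-evaluation-harness, roughly along the lines below; the task names and settings here are illustrative assumptions, not the authors' configuration:

```python
# Sketch: benchmark-style evaluation with lm-evaluation-harness
# (pip install lm-eval). Tasks and batch size are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Salesforce/xgen-small-4B-base-r,dtype=auto",
    tasks=["arc_challenge", "hellaswag", "gsm8k"],
    batch_size=8,
)
print(results["results"])  # per-task metric dictionary
```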

## Citation

```bibtex
@misc{xgensmall,
      title={xGen-small Technical Report},
      author={Erik Nijkamp and Bo Pang and Egor Pakhomov and Akash Gokul and Jin Qu and Silvio Savarese and Yingbo Zhou and Caiming Xiong},
      year={2025},
      eprint={2505.06496},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.06496},
}
```

## Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

## Model Licenses

The models are being released under CC-BY-NC-4.0, Copyright © Salesforce, Inc. All Rights Reserved.
