---
license: cc-by-nc-4.0
language:
  - en
library_name: transformers
---

# Welcome to the xGen-small family!

xGen-small (blog, [arXiv](https://arxiv.org/abs/2505.06496)) is an enterprise-ready compact language model that combines domain-focused data curation, scalable pre-training, length extension, and RL fine-tuning to deliver long-context performance at predictable, low cost. This model release is for research purposes only.

## Model Series

xGen-small comes in two sizes (4B and 9B) with two variants (pre-trained and post-trained):

| Model | # Total Params | Context Length | Variant | Download |
|---|---|---|---|---|
| salesforce/xgen-small-4B-base-r | 4B | 128k | Pre-trained | 🤗 [Link](https://huggingface.co/Salesforce/xgen-small-4B-base-r) |
| salesforce/xgen-small-4B-instruct-r | 4B | 128k | Post-trained | 🤗 [Link](https://huggingface.co/Salesforce/xgen-small-4B-instruct-r) |
| salesforce/xgen-small-9B-base-r | 9B | 128k | Pre-trained | 🤗 [Link](https://huggingface.co/Salesforce/xgen-small-9B-base-r) |
| salesforce/xgen-small-9B-instruct-r | 9B | 128k | Post-trained | 🤗 [Link](https://huggingface.co/Salesforce/xgen-small-9B-instruct-r) |
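
All variants advertise a 128k context window; as a quick sanity check, the window can be read back from the model config. A minimal sketch, assuming these checkpoints follow the common transformers convention of exposing the window as `max_position_embeddings` (an assumption, not verified here):

```python
from transformers import AutoConfig

# Assumption: the checkpoint stores its context window in the standard
# max_position_embeddings config field.
config = AutoConfig.from_pretrained("Salesforce/xgen-small-9B-base-r")
print(config.max_position_embeddings)  # expected: 131072 (128k) per the table above
```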

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/xgen-small-9B-base-r"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the model in its native precision and move it to GPU if available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto"
).to(device)

# Tokenize a plain-text prompt (the base model is not chat-tuned).
prompt = "What is Salesforce?"
inputs = tokenizer(
    prompt,
    return_tensors="pt",
    padding=False,
    truncation=True
).to(device)

# Generate a short continuation and decode it back to text.
generated = model.generate(**inputs, max_new_tokens=32)
output = tokenizer.decode(
    generated[0],
    skip_special_tokens=True,
)
print(output)
```
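
For the post-trained variants, a chat-style prompt is more appropriate. A minimal sketch, assuming the instruct checkpoints ship a chat template usable via the standard `tokenizer.apply_chat_template` API (an assumption; check the tokenizer config of the release you download):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the instruct checkpoints provide a chat template.
model_name = "Salesforce/xgen-small-9B-instruct-r"
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto").to(device)

# Format the conversation with the model's chat template.
messages = [{"role": "user", "content": "What is Salesforce?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(device)

generated = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
output = tokenizer.decode(generated[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(output)
```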

## Evaluation

| Category | Task | Llama 3.1-8B | Granite 3.3-8B | Qwen2.5-7B | xGen-small 9B Base |
|---|---|---|---|---|---|
| General Knowledge & Reasoning | ARC-Challenge | 58.0 | 62.5 | 63.7 | 67.4 |
| General Knowledge & Reasoning | Big-Bench Hard | 46.3 | 46.8 | 53.6 | 58.2 |
| General Knowledge & Reasoning | HellaSwag | 81.8 | 83.0 | 80.0 | 83.7 |
| General Knowledge & Reasoning | MMLU | 65.1 | 62.7 | 74.2 | 71.1 |
| General Knowledge & Reasoning | MMLU-Pro | 32.7 | 31.3 | 43.7 | 39.8 |
| General Knowledge & Reasoning | TruthfulQA | 45.2 | 52.2 | 56.4 | 48.6 |
| General Knowledge & Reasoning | WinoGrande | 76.9 | 80.3 | 76.1 | 78.6 |
| Math & Science | GPQA | 31.9 | 30.3 | 31.4 | 32.0 |
| Math & Science | GSM8K | 55.6 | 61.4 | 79.1 | 83.2 |
| Math & Science | MATH | 22.0 | 30.9 | 50.2 | 52.5 |
| Coding | HumanEval | 37.3 | 38.9 | 55.2 | 53.9 |
| Coding | HumanEval+ | 31.4 | 34.3 | 47.7 | 47.9 |
| Coding | MBPP | 45.0 | 43.5 | 57.1 | 50.1 |
| Coding | MBPP+ | 51.0 | 48.1 | 64.8 | 57.6 |

## Citation

```bibtex
@misc{xgensmall,
      title={xGen-small Technical Report},
      author={Erik Nijkamp and Bo Pang and Egor Pakhomov and Akash Gokul and Jin Qu and Silvio Savarese and Yingbo Zhou and Caiming Xiong},
      year={2025},
      eprint={2505.06496},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.06496},
}
```

## Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

## Model Licenses

The models are being released under CC-BY-NC-4.0, Copyright © Salesforce, Inc. All Rights Reserved.