
Ling-Coder-lite

🤖 ModelScope 🤗 Hugging Face 🖥️ GitHub

Introduction

Ling-Coder-Lite is an MoE LLM developed and open-sourced by InclusionAI, with 16.8B total parameters and 2.75B activated parameters. The model demonstrates state-of-the-art performance on 12 coding benchmarks while offering competitive latency and throughput compared to code LLMs of similar size. In addition to open-sourcing the model itself, we also release a substantial amount of code-related data, including synthetic QA, SFT, and DPO datasets. More details are described in the technical report Ling-Coder-TR.

Model Downloads

The table below lists the available models along with their parameters and download links, so you can choose the one that fits your use case. If you are located in mainland China, we also provide the model on modelscope.cn to speed up the download process.

| Model | #Total Params | #Activated Params | Context Length | Download |
|-------|---------------|-------------------|----------------|----------|
| Ling-Coder-lite-base | 16.8B | 2.75B | 16K | 🤗 HuggingFace |
| Ling-Coder-lite | 16.8B | 2.75B | 16K | 🤗 HuggingFace |
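
If you prefer to fetch the weights programmatically rather than through the links above, a minimal sketch using the huggingface_hub client is shown below (the local directory name is just an example):

from huggingface_hub import snapshot_download

# Download all model files (weights, tokenizer, remote code) into a local directory.
local_dir = snapshot_download(
    repo_id="inclusionAI/Ling-Coder-lite",
    local_dir="./Ling-Coder-lite",  # example target directory; adjust as needed
)
print(f"Model files downloaded to {local_dir}")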

Dataset Downloads

| Dataset | Samples | Download |
|---------|---------|----------|
| Ling-Coder-SyntheticQA | 24M | 🤗 HuggingFace |
| Ling-Coder-SFT | 5M | 🤗 HuggingFace |
| Ling-Coder-DPO | 250K | 🤗 HuggingFace |
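
The datasets can be loaded with the datasets library. The snippet below is a minimal sketch; the repo id inclusionAI/Ling-Coder-SFT is assumed from the dataset name above and may differ from the actual repository.

from itertools import islice
from datasets import load_dataset

# Stream the SFT dataset; the repo id is assumed from the table above and may differ.
sft = load_dataset("inclusionAI/Ling-Coder-SFT", split="train", streaming=True)
for example in islice(sft, 1):
    print(example)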

Evaluation

Detailed evaluation results are reported in our technical report Ling-Coder-TR.

Quickstart

🤗 Hugging Face Transformers

Here is a code snippet showing how to use the chat model with transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-Coder-lite"

# Load the model and tokenizer; trust_remote_code is required for the custom bailing_moe architecture.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True
)

# Build a chat prompt using the model's chat template.
prompt = "Write a quick sort algorithm in python."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response and strip the prompt tokens from the output.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
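
For interactive use you may prefer to stream tokens as they are generated. Below is a minimal sketch using transformers' TextStreamer, reusing the model, tokenizer, and model_inputs from the snippet above:

from transformers import TextStreamer

# Print tokens as they are generated instead of waiting for the full completion.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer
)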

Deployment

Please refer to the GitHub repository for deployment instructions.
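
As one illustration, a minimal sketch of offline inference with vLLM is shown below. This assumes your vLLM installation supports the model's custom bailing_moe architecture; follow the GitHub repository for the officially supported setup.

from vllm import LLM, SamplingParams

# Load the model with vLLM; trust_remote_code is needed for the custom architecture.
llm = LLM(model="inclusionAI/Ling-Coder-lite", trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.2, max_tokens=512)

# The chat API applies the model's chat template before generation.
outputs = llm.chat(
    [{"role": "user", "content": "Write a quick sort algorithm in python."}],
    sampling_params,
)
print(outputs[0].outputs[0].text)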

License

This code repository is licensed under the MIT License.

Citation

@misc{codefuse2025samplemattersleveragingmixtureofexperts,
      title={Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM}, 
      author={Codefuse and Ling Team},
      year={2025},
      eprint={2503.17793},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2503.17793}, 
}