NLPblue committed (verified)
Commit 991ea16 · Parent: b958b17

Update README.md

Files changed (1): README.md (+5 -3)
README.md CHANGED
@@ -4,12 +4,14 @@ license: apache-2.0
 
 # Introduction
 
-We present **Tongyi-DeepResearch**, an agentic large language model featuring 30 billion total parameters, with only 3 billion activated per token. Developed by Tongyi Lab, the model is specifically designed for **long-horizon, deep information-seeking** tasks. Tongyi-DeepResearch demonstrates state-of-the-art performance across a range of agentic search benchmarks, including BrowserComp-EN, BrowserComp-ZH, GAIA, Humanity's Last Exam, xbench-DeepSearch, and WebWalkerQA.
+We present **Tongyi DeepResearch**, an agentic large language model featuring 30 billion total parameters, with only 3 billion activated per token. Developed by Tongyi Lab, the model is specifically designed for **long-horizon, deep information-seeking** tasks. Tongyi DeepResearch demonstrates state-of-the-art performance across a range of agentic search benchmarks, including BrowseComp-EN, BrowseComp-ZH, GAIA, Humanity's Last Exam, xbench-DeepSearch, and WebWalkerQA.
 
 
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/63fc4c00a3c067e62899d32b/OhQCYYJu1LhrS446Qct5D.png)
+
 ## Key Features
 
-- ⚙️ **Fully automated synthetic data generation pipeline**: Covers both the pre-training stage (data creation, filtering, and scaling) and the post-training stage (evaluation, refinement, and filtering).
+- ⚙️ **Fully automated synthetic data generation pipeline**: We design a highly scalable, fully automatic data synthesis pipeline that powers agentic pre-training, supervised fine-tuning, and reinforcement learning.
 - 🔄 **Large-scale continual pre-training on agentic data**: Leveraging diverse, high-quality agentic interaction data to extend model capabilities, maintain freshness, and strengthen reasoning performance.
 - 🔁 **End-to-end reinforcement learning**: We employ a strictly on-policy RL approach based on a customized Group Relative Policy Optimization framework, with token-level policy gradients, leave-one-out advantage estimation, and selective filtering of negative samples to stabilize training in a non-stationary environment (see the advantage-estimation sketch below).
 - 🤖 **Agent inference paradigm compatibility**: At inference, Tongyi DeepResearch supports two paradigms: ReAct, for rigorously evaluating the model's core intrinsic abilities, and an IterResearch-based 'Heavy' mode, which uses a test-time scaling strategy to unlock the model's maximum performance (see the ReAct loop sketch below).
@@ -26,4 +28,4 @@ You can download the model then run the inference scripts in https://github.com/Alibaba-NLP/DeepResearch
  year={2025},
  howpublished={\url{https://github.com/Alibaba-NLP/DeepResearch}}
 }
-```
+```
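
For readers unfamiliar with the leave-one-out advantage estimation named in the end-to-end RL feature, here is a minimal sketch of how a group-relative, critic-free advantage can be computed. This illustrates the generic RLOO-style technique under stated assumptions, not Tongyi Lab's actual GRPO implementation; the function name and group layout are illustrative.

```python
import numpy as np

def leave_one_out_advantages(rewards: np.ndarray) -> np.ndarray:
    """Critic-free advantages for a group of rollouts of the same prompt.

    Each rollout is baselined against the mean reward of the *other*
    rollouts in its group, which keeps the estimate unbiased without
    training a separate value network.
    """
    g = len(rewards)
    if g < 2:
        raise ValueError("leave-one-out needs at least 2 rollouts per group")
    baselines = (rewards.sum() - rewards) / (g - 1)  # mean of the others
    return rewards - baselines

# Example: four rollouts for one prompt with binary task rewards.
print(leave_one_out_advantages(np.array([1.0, 0.0, 0.0, 1.0])))
# -> approximately [ 0.667 -0.667 -0.667  0.667]
```

In a token-level policy-gradient setup, each sequence's advantage would then be broadcast to every one of its tokens before the update, and negative samples could be selectively filtered at this point, as the feature list describes.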
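The ReAct paradigm mentioned in the last feature is a standard reason-act-observe loop. Below is a minimal sketch of such a loop; `call_model`, `run_tool`, and the Thought/Action/Observation tags are hypothetical stand-ins, not the repository's API — the actual inference scripts live in the linked GitHub repo.

```python
from typing import Callable

def react_loop(
    question: str,
    call_model: Callable[[str], str],  # hypothetical: prompt -> model text
    run_tool: Callable[[str], str],    # hypothetical: action -> observation
    max_steps: int = 10,
) -> str:
    """Generic ReAct loop: interleave reasoning, tool calls, and observations."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_model(transcript)  # model emits a Thought/Action pair or a final answer
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            action = step.split("Action:", 1)[1].strip()
            observation = run_tool(action)  # e.g. a web search or page visit
            transcript += f"Observation: {observation}\n"
    return "No answer within the step budget."
```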