Commit ed6ee32 (verified) by gabriellarson, parent 3415d8d: Create README.md
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
tags:
- ERNIE4.5
library_name: transformers
base_model:
- baidu/ERNIE-4.5-300B-A47B-PT
---
<div align="center" style="line-height: 1;">
  <a href="https://ernie.baidu.com/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖_Chat-ERNIE_Bot-blue" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/baidu" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Baidu-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/PaddlePaddle/ERNIE" target="_blank" style="margin: 2px;">
    <img alt="Github" src="https://img.shields.io/badge/GitHub-ERNIE-000?logo=github&color=0000FF" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://ernie.baidu.com/blog/ernie4.5" target="_blank" style="margin: 2px;">
    <img alt="Blog" src="https://img.shields.io/badge/🖖_Blog-ERNIE4.5-A020A0" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
<div align="center" style="line-height: 1;">
  <a href="#license" style="margin: 2px;">
    <img alt="License" src="https://img.shields.io/badge/License-Apache2.0-A5de54" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
# ERNIE-4.5-300B-A47B

> [!NOTE]
> "**-Paddle**" models use [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) weights, while "**-PT**" models use Transformer-style PyTorch weights.

## ERNIE 4.5 Highlights

The advanced capabilities of the ERNIE 4.5 models, particularly the MoE-based A47B and A3B series, are underpinned by several key technical innovations:

1. **Multimodal Heterogeneous MoE Pre-Training:** Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and improve performance on tasks involving text understanding and generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of another, we designed a *heterogeneous MoE structure*, incorporated *modality-isolated routing*, and employed a *router orthogonal loss* and a *multimodal token-balanced loss*. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.

2. **Scaling-Efficient Infrastructure:** We propose a novel heterogeneous hybrid parallelism and hierarchical load balancing strategy for efficient training of ERNIE 4.5 models. By using intra-node expert parallelism, memory-efficient pipeline scheduling, FP8 mixed-precision training, and fine-grained recomputation methods, we achieve remarkable pre-training throughput. For inference, we propose a *multi-expert parallel collaboration* method and a *convolutional code quantization* algorithm to achieve 4-bit/2-bit lossless quantization. Furthermore, we introduce PD disaggregation with dynamic role switching for effective resource utilization to enhance inference performance for ERNIE 4.5 MoE models. Built on [PaddlePaddle](https://github.com/PaddlePaddle/Paddle), ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.

3. **Modality-Specific Post-Training:** To meet the diverse requirements of real-world applications, we fine-tuned variants of the pre-trained model for specific modalities. Our LLMs are optimized for general-purpose language understanding and generation. The VLMs focus on vision-language understanding and support both thinking and non-thinking modes. Each model was post-trained with a combination of *Supervised Fine-tuning (SFT)* and either *Direct Preference Optimization (DPO)* or a modified reinforcement learning method named *Unified Preference Optimization (UPO)*.

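To make the modality-isolated routing idea concrete, here is a toy sketch (not the actual ERNIE implementation; all names, sizes, and router weights are illustrative): each token is routed to top-k experts drawn only from its own modality's expert pool, so text experts never receive vision tokens and vice versa.

```python
import numpy as np

def modality_isolated_route(tokens, modalities, n_text_experts=4, n_vision_experts=4, top_k=2):
    """Toy router: pick top-k experts within the token's own modality pool.

    tokens:      (n, d) array of token embeddings
    modalities:  length-n list of "text" / "vision"
    Returns one tuple of global expert ids per token; text experts occupy
    ids 0..n_text_experts-1, vision experts the ids after them.
    """
    rng = np.random.default_rng(0)
    d = tokens.shape[1]
    # Separate router weights per modality (the "heterogeneous" part).
    w_text = rng.standard_normal((d, n_text_experts))
    w_vision = rng.standard_normal((d, n_vision_experts))
    assignments = []
    for tok, mod in zip(tokens, modalities):
        if mod == "text":
            logits, offset = tok @ w_text, 0
        else:
            logits, offset = tok @ w_vision, n_text_experts
        top = np.argsort(logits)[-top_k:]  # top-k experts, within-modality only
        assignments.append(tuple(int(i) + offset for i in top))
    return assignments

toks = np.random.default_rng(1).standard_normal((6, 8))
mods = ["text", "vision", "text", "text", "vision", "text"]
routes = modality_isolated_route(toks, mods)
print(routes)  # vision tokens can only land on expert ids >= n_text_experts
```

Because the pools are disjoint, gradients for each modality's experts come only from that modality's tokens; the balancing losses mentioned above then keep load even within each pool.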
## Model Overview

ERNIE-4.5-300B-A47B is a post-trained text MoE model, with 300B total parameters and 47B activated parameters for each token. The following are the model configuration details:

|Key|Value|
|-|-|
|Modality|Text|
|Training Stage|Posttraining|
|Params (Total / Activated)|300B / 47B|
|Layers|54|
|Heads (Q/KV)|64 / 8|
|Text Experts (Total / Activated)|64 / 8|
|Vision Experts (Total / Activated)|64 / 8|
|Context Length|131072|

## Quickstart

### Using the `transformers` library

**Note**: Before using the model, ensure you have the `transformers` library installed (version 4.50.0 or higher).

The following code snippet illustrates how to use the model to generate content from a given prompt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "baidu/ERNIE-4.5-300B-A47B-PT"

# Load the tokenizer and the model.
# For a model this size you will likely also want device_map="auto"
# and an appropriate dtype to shard it across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Prepare the model input.
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], add_special_tokens=False, return_tensors="pt").to(model.device)

# Conduct text completion.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=1024
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# Decode the generated ids.
generate_text = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")
print("generate_text:", generate_text)
```

### Using vLLM

Install [vLLM](https://github.com/vllm-project/vllm/tree/main) from its GitHub repository; a Python-only [build](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#set-up-using-python-only-build-without-compilation) (without compilation) is sufficient.

```bash
# 80G * 16 GPU
vllm serve baidu/ERNIE-4.5-300B-A47B-PT --trust-remote-code
```

```bash
# FP8 online quantization, 80G * 16 GPU
vllm serve baidu/ERNIE-4.5-300B-A47B-PT --trust-remote-code --quantization fp8
```
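Once the server is up, vLLM exposes an OpenAI-compatible API (by default at `http://localhost:8000/v1`). A minimal client sketch, assuming the default host/port and the model name used above; the prompt text is just an example:

```python
import json

# Request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint.
payload = {
    "model": "baidu/ERNIE-4.5-300B-A47B-PT",
    "messages": [
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    "temperature": 0.8,  # suggested sampling settings (see Best Practices below)
    "top_p": 0.8,
    "max_tokens": 1024,
}
body = json.dumps(payload)

# To send it (requires the server from the command above to be running):
#   curl http://localhost:8000/v1/chat/completions \
#        -H "Content-Type: application/json" -d @request.json
# or point the openai Python client at base_url="http://localhost:8000/v1".
print(body)
```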

## Best Practices

### Sampling Parameters

To achieve optimal performance, we suggest using `temperature=0.8` and `top_p=0.8`.
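With the `transformers` snippet from the Quickstart, these settings map onto `generate()` keyword arguments as sketched below; note that `do_sample=True` is required for temperature and top-p to take effect (greedy decoding ignores them):

```python
# Suggested sampling settings as generate() keyword arguments.
gen_kwargs = dict(
    do_sample=True,   # enable sampling; otherwise temperature/top_p are ignored
    temperature=0.8,
    top_p=0.8,
    max_new_tokens=1024,
)
# Usage with the Quickstart snippet:
#   generated_ids = model.generate(model_inputs.input_ids, **gen_kwargs)
print(sorted(gen_kwargs))
```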

### Prompts for Web Search

For Web Search, the prompt takes three arguments: {references}, {date}, and {question}.

For Chinese questions, we use the prompt:

```python
ernie_search_zh_prompt = \
'''下面你会收到当前时间、多个不同来源的参考文章和一段对话。你的任务是阅读多个参考文章,并根据参考文章中的信息回答对话中的问题。
以下是当前时间和参考文章:
---------
#当前时间
{date}
#参考文章
{references}
---------
请注意:
1. 回答必须结合问题需求和当前时间,对参考文章的可用性进行判断,避免在回答中使用错误或过时的信息。
2. 当参考文章中的信息无法准确地回答问题时,你需要在回答中提供获取相应信息的建议,或承认无法提供相应信息。
3. 你需要优先根据百科、官网、权威机构、专业网站等高权威性来源的信息来回答问题。
4. 回复需要综合参考文章中的相关数字、案例、法律条文、公式等信息,使你的答案更专业。
5. 当问题属于创作类任务时,需注意以下维度:
- 态度鲜明:观点、立场清晰明确,避免模棱两可,语言果断直接
- 文采飞扬:用词精准生动,善用修辞手法,增强感染力
- 有理有据:逻辑严密递进,结合权威数据/事实支撑论点
---------
下面请结合以上信息,回答问题,补全对话
{question}'''
```

For English questions, we use the prompt:

```python
ernie_search_en_prompt = \
'''
Below you will be given the current time, multiple references from different sources, and a conversation. Your task is to read the references and use the information in them to answer the question in the conversation.
Here are the current time and the references:
---------
#Current Time
{date}
#References
{references}
---------
Please note:
1. Based on the question's requirements and the current time, assess the usefulness of the references to avoid using inaccurate or outdated information in the answer.
2. If the references do not provide enough information to accurately answer the question, you should suggest how to obtain the relevant information or acknowledge that you are unable to provide it.
3. Prioritize using information from highly authoritative sources such as encyclopedias, official websites, authoritative institutions, and professional websites when answering questions.
4. Incorporate relevant numbers, cases, legal provisions, formulas, and other details from the references to make your answer more professional.
5. For creative tasks, keep these dimensions in mind:
- Clear attitude: Clear views and positions, avoid ambiguity, and use decisive and direct language
- Brilliant writing: Precise and vivid words, good use of rhetoric, and enhance the appeal
- Well-reasoned: Rigorous logic and progressive, combined with authoritative data/facts to support the argument
---------
Now, using the information above, answer the question and complete the conversation:
{question}'''
```

Parameter notes:

* {question} is the user's question.
* {date} is the current time; the recommended format is "YYYY-MM-DD HH:MM:SS, Day of the Week, Beijing/China".
* {references} contains the reference articles; the recommended format is:

```text
##参考文章1
标题:周杰伦
文章发布时间:2025-04-20
内容:周杰伦(Jay Chou),1979年1月18日出生于台湾省新北市,祖籍福建省永春县,华语流行乐男歌手、音乐人、演员、导演、编剧,毕业于淡江中学。2000年,发行个人首张音乐专辑《Jay》。...
来源网站网址:baike.baidu.com
来源网站的网站名:百度百科
##参考文章2
...
```
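Putting the pieces together, the placeholders are filled with plain `str.format`. A sketch using a shortened stand-in template (the real templates are `ernie_search_zh_prompt` / `ernie_search_en_prompt` above); the date and reference content here are invented for illustration:

```python
# Shortened stand-in for the full web-search prompt templates.
template = '''#Current Time
{date}
#References
{references}
---------
{question}'''

date = "2025-07-01 10:00:00, Tuesday, Beijing/China"
references = '''##参考文章1
标题:周杰伦
文章发布时间:2025-04-20
来源网站网址:baike.baidu.com
来源网站的网站名:百度百科'''
question = "周杰伦是谁?"

# Fill the placeholders; the result is what gets sent to the model.
prompt = template.format(date=date, references=references, question=question)
print(prompt)
```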

## License

The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.

## Citation

If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly cite our technical report:

```bibtex
@misc{ernie2025technicalreport,
  title={ERNIE 4.5 Technical Report},
  author={Baidu ERNIE Team},
  year={2025},
  eprint={},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={}
}
```