<p align="center">
 <img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>

<p align="center">
 🤗&nbsp;<a href="https://huggingface.co/collections/tencent/hunyuan-mt-68b42f76d473f82798882597"><b>Hugging Face</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 <img src="https://avatars.githubusercontent.com/u/109945100?s=200&v=4" width="16"/>&nbsp;<a href="https://modelscope.cn/collections/Hunyuan-MT-2ca6b8e1b4934f"><b>ModelScope</b></a>
</p>

<p align="center">
 🖥️&nbsp;<a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 🕹️&nbsp;<a href="https://hunyuan.tencent.com/modelSquare/home/list"><b>Demo</b></a>
</p>

<p align="center">
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-MT"><b>GitHub</b></a>
</p>
## Model Introduction

The Hunyuan Translation Model comprises a translation model, Hunyuan-MT-7B, and an ensemble model, Hunyuan-MT-Chimera. The translation model translates source text into the target language, while the ensemble model integrates multiple translation outputs to produce a higher-quality result. The models primarily support mutual translation among 33 languages, including five ethnic minority languages of China.

### Key Features and Advantages

- In the WMT25 competition, the model achieved first place in 30 of the 31 language categories it participated in.
- Hunyuan-MT-7B delivers industry-leading performance among models of comparable scale.
- Hunyuan-MT-Chimera-7B is the industry's first open-source translation ensemble model, raising translation quality to a new level.
- A comprehensive training framework for translation models is proposed, spanning pretraining → cross-lingual pretraining (CPT) → supervised fine-tuning (SFT) → translation enhancement → ensemble refinement, and achieving state-of-the-art (SOTA) results for models of similar size.
## Related News

* 2025.9.1 We have open-sourced **Hunyuan-MT-7B** and **Hunyuan-MT-Chimera-7B** on Hugging Face.
<br>

&nbsp;
## Model Links

| Model Name | Description | Download |
| ----------- | ----------- | ----------- |
| Hunyuan-MT-7B | Hunyuan 7B translation model | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-7B) |
| Hunyuan-MT-7B-fp8 | Hunyuan 7B translation model, FP8 quantized | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-7B-fp8) |
| Hunyuan-MT-Chimera-7B | Hunyuan 7B translation ensemble model | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-Chimera-7B) |
| Hunyuan-MT-Chimera-7B-fp8 | Hunyuan 7B translation ensemble model, FP8 quantized | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-Chimera-7B-fp8) |
## Prompts

### Prompt Template for ZH<=>XX Translation

把下面的文本翻译成`<target_language>`,不要额外解释。

`<source_text>`

(The Chinese prompt reads: "Translate the following text into `<target_language>`, without additional explanation.")

---

### Prompt Template for XX<=>XX Translation, excluding ZH<=>XX

Translate the following segment into `<target_language>`, without additional explanation.

`<source_text>`
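When calling the model programmatically, the two templates above only need their placeholders filled in. Below is a minimal sketch; `build_translation_prompt` and the way the language names are written are our own illustrative choices, not part of the official interface.

```python
# Minimal sketch of prompt assembly for Hunyuan-MT-7B.
# `build_translation_prompt` is a hypothetical helper, not an official API.

def build_translation_prompt(source_text: str, source_language: str, target_language: str) -> str:
    """Return the user prompt for a single translation request."""
    if "Chinese" in (source_language, target_language) or "中文" in (source_language, target_language):
        # ZH<=>XX: the prompt itself is written in Chinese
        return f"把下面的文本翻译成{target_language},不要额外解释。\n\n{source_text}"
    # XX<=>XX: the prompt is written in English
    return (
        f"Translate the following segment into {target_language}, "
        f"without additional explanation.\n\n{source_text}"
    )

# Example: English -> Chinese falls under the ZH<=>XX template
print(build_translation_prompt("It’s on the house.", "English", "Chinese"))
```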
### Prompt Template for Hunyuan-MT-Chimera-7B

Analyze the following multiple `<target_language>` translations of the `<source_language>` segment surrounded in triple backticks and generate a single refined `<target_language>` translation. Only output the refined translation, do not explain.

The `<source_language>` segment:
```<source_text>```

The multiple `<target_language>` translations:
1. ```<translated_text1>```
2. ```<translated_text2>```
3. ```<translated_text3>```
4. ```<translated_text4>```
5. ```<translated_text5>```
6. ```<translated_text6>```
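The ensemble template can be filled in the same way. The sketch below is illustrative only: `build_chimera_prompt` and the sample candidate translations are our own, and it assumes exactly the layout shown above, with each candidate wrapped in triple backticks. In practice the candidates would typically be produced by Hunyuan-MT-7B.

```python
# Minimal sketch of assembling the Hunyuan-MT-Chimera-7B refinement prompt.
# `build_chimera_prompt` is a hypothetical helper, not an official API.

def build_chimera_prompt(source_language: str, target_language: str,
                         source_text: str, candidates: list[str]) -> str:
    """Fill the Chimera refinement template with candidate translations."""
    numbered = "\n".join(f"{i}. ```{c}```" for i, c in enumerate(candidates, start=1))
    return (
        f"Analyze the following multiple {target_language} translations of the "
        f"{source_language} segment surrounded in triple backticks and generate a "
        f"single refined {target_language} translation. Only output the refined "
        f"translation, do not explain.\n\n"
        f"The {source_language} segment:\n```{source_text}```\n\n"
        f"The multiple {target_language} translations:\n{numbered}"
    )

# Example with three candidate translations (the template above shows six)
prompt = build_chimera_prompt(
    "English", "Chinese", "It’s on the house.",
    ["这是免费赠送的。", "这是店家请客。", "这杯算我们请的。"],
)
```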
&nbsp;
### Use with transformers

First, install transformers (v4.55.4 is recommended):

```shell
pip install transformers==4.55.4
```

The following code snippet shows how to use the transformers library to load and apply the model. We use tencent/Hunyuan-MT-7B as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "tencent/Hunyuan-MT-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")  # You may want to use bfloat16 and/or move to GPU here

# The translation request is a single user message; no system prompt is used
messages = [
    {"role": "user", "content": "Translate the following segment into Chinese, without additional explanation.\n\nIt’s on the house."},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=False,
    return_tensors="pt"
)

outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
```
115
+
116
+ We recommend using the following set of parameters for inference. Note that our model does not have the default system_prompt.
117
+
118
+ ```json
119
+ {
120
+ "top_k": 20,
121
+ "top_p": 0.6,
122
+ "repetition_penalty": 1.05,
123
+ "temperature": 0.7
124
+ }
125
+ ```
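When generating with transformers as in the snippet above, these values map directly onto `model.generate` keyword arguments. The sketch below reuses `model`, `tokenizer`, and `tokenized_chat` from the earlier example; adding `do_sample=True` is our assumption, since temperature/top-p/top-k only take effect when sampling is enabled.

```python
# Generation with the recommended sampling parameters (reuses objects from the example above).
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,            # sampling must be on for temperature/top_p/top_k to apply
    top_k=20,
    top_p=0.6,
    temperature=0.7,
    repetition_penalty=1.05,
)

# Decode only the newly generated tokens, skipping the echoed prompt.
new_tokens = outputs[0][tokenized_chat.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```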