First Commit
Hello HuggingFace!

- .gitattributes +2 -0
- DocsAsset/GAMESOUL_top_pic.png +3 -0
- README.md +330 -3
- README_ZH_CN.md +326 -0
- adapter_config.json +41 -0
- adapter_model.safetensors +3 -0
- added_tokens.json +28 -0
- chat_template.jinja +97 -0
- merges.txt +0 -0
- special_tokens_map.json +31 -0
- tokenizer.json +3 -0
- tokenizer_config.json +240 -0
- vocab.json +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+DocsAsset/GAMESOUL_top_pic.png filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
DocsAsset/GAMESOUL_top_pic.png
ADDED
(binary image, stored with Git LFS)
README.md
CHANGED
@@ -1,3 +1,330 @@
---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/qwen3-4b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
- game
- npc
- gamesoul
- ai
- RAG
- MCP
- 游戏
- Unreal
- Unity
- Cocos
---

# Model Card for GameSoul-AI-NPC

![GameSoul](DocsAsset/GAMESOUL_top_pic.png)

[English] | [中文](README_ZH_CN.md)

🤖 GameSoul-AI-NPC is a game NPC behavior-decision model that fuses multi-source information to generate dynamic actions consistent with each character's setting. It supports real-time environment responses, event reactions, memory retrieval, and character consistency, and can invoke Reasoning, RAG (Retrieval-Augmented Generation), and MCP (Multi-Character Planning).

## Model Details 🔍

### Core Capabilities

| Module | Capabilities |
|--------|--------------|
| **Environment Perception** | Parses scene state, player interactions, and time/weather signals |
| **Memory System** | Supports long-term behavior memory (stored in a database) |
| **Character Consistency** | Generates actions according to a predefined background (personality, goals, identity, abilities, memory) |
| **Dynamic Decision** | Generates action sequences (movement, dialogue, interaction) from the combined state |

### Architecture 🏗️

```mermaid
graph TD
    A[🌏 Environment State] --> C(🧠 LLM Decision Engine)
    B[🧙‍♂️ Character Memory] --> C
    D[💭 Current Event] --> C
    C --> E{Behavior Arbitration}
    E --> F[👊 Action Commands]
    E --> G[💬 Natural Language Feedback]
    E --> H[📌 Store Memory]
```

### Extension Interface

🔌 MCP protocol (reference only): call MCP through an `npc_memory_api` endpoint (example) to fetch information from the database.

```json
{
  "jsonrpc": "2.0",
  "id": 123456789,
  "method": "get_npc_memory",
  "params": {
    "player_action": "steal_item",
    "npc_id": "npc_123456"
  }
}
```
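
As an illustrative sketch only (the endpoint and transport are assumptions, not part of this repository), a request like the one above can be assembled and serialized before being sent to whatever MCP service the game backend exposes:

```python
import json

def build_memory_request(npc_id: str, player_action: str, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 request for the example get_npc_memory method."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "get_npc_memory",
        "params": {
            "player_action": player_action,
            "npc_id": npc_id,
        },
    }
    return json.dumps(payload)

# The resulting string can be POSTed to the game's MCP endpoint.
req = build_memory_request("npc_123456", "steal_item", request_id=123456789)
```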

### Model Description 📝

- **Developed by:** NewOrigin
- **Funded by:** NewOrigin
- **Shared by:** NewOrigin
- **Model type:** Decoder-only Transformer
- **Language(s) (NLP):** English, Chinese, and 110+ other languages
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b

### Model Sources 🌐

- **Repository:** <https://huggingface.co/unsloth/Qwen3-4B>

## Uses 🚀

This model is a fine-tuned version of unsloth/qwen3-4b, designed to give game NPCs intelligent behavior. It generates dynamic responses based on character background, memory context, and environmental state, and supports Reasoning, RAG, and MCP calls.

Intended users:

- Game developers and designers
- Researchers in game NPC behavior AI
- Game studios and indie developers
- Other interested parties

Potential beneficiaries:

- End players interacting with NPCs

### Direct Use 🖥️

The model can be used directly in game environments to generate AI behavior without additional fine-tuning. It produces NPC responses based on context, memory, player actions, and environmental state. Developers can call it via a reasoning interface or integrate it into game logic or RAG workflows.

Typical scenarios include:

- Automated NPC ecosystems for real-time world simulation
- Story- and memory-based NPC interactions
- Behavior planning based on environmental conditions
- Multi-agent collaborative reasoning and responses

### Downstream Use 📦

This model is suitable for embedding into game systems as the core reasoning and dialogue engine for AI-driven NPCs, integrated with:

- Game engines for real-time dialogue generation and behavior control
- Multi-agent simulation platforms providing long-term memory and contextual reasoning
- RAG-based reasoning systems that enhance NPC decision-making through knowledge retrieval
- Databases that support the MCP protocol, used to store NPC data and fetch it when needed

Further fine-tuning can be applied to match game tone, settings, or mission requirements.
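
As a minimal illustration of the retrieval step in such a RAG workflow (the scoring scheme below is a demonstration assumption, not part of this model), stored memory events can be ranked against the current event before being placed in the prompt:

```python
def retrieve_memories(memories: list[dict], event: str, k: int = 2) -> list[dict]:
    """Rank memory events by naive word overlap with the current event, keep top k."""
    event_words = set(event.lower().split())

    def score(mem: dict) -> int:
        mem_words = " ".join(str(v) for v in mem.values()).lower().split()
        return sum(1 for w in mem_words if w in event_words)

    return sorted(memories, key=score, reverse=True)[:k]

memories = [
    {"eventid": "evt_1", "Action": "Provided magical books"},
    {"eventid": "evt_2", "Action": "Stole herbs"},
]
# "stole" and "herbs" overlap with evt_2, so it ranks first.
top = retrieve_memories(memories, "a thief stole herbs again", k=1)
```

A production system would use embedding similarity instead of word overlap, but the shape of the step (score, sort, truncate) is the same.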

### Out-of-Scope Use ⚠️

This model is not suitable for high-risk or security-sensitive applications, such as:

- Legal, medical, financial, or safety-critical decision-making
- Scenarios requiring high factual accuracy or ethical reasoning

## Bias, Risks, and Limitations 🚧

The model may inherit biases from its pre-training or fine-tuning data, including cultural stereotypes, sexual content, gender bias, and character behavior patterns.

Technical limitations include:

- It cannot verify the truthfulness or logical correctness of generated content

Filtering of outputs is recommended, especially when deployed in systems involving minors.

### Recommendations 💡

- Thoroughly test the model across various game scenarios before deployment to understand its boundaries and potential failure modes.
- Establish an actionable framework within the game engine before integrating the model.

## How to Get Started with the Model 🚩

Example of loading and calling the fine-tuned model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "NewOrigin/GameSoul-AI-NPC-4B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "input your content"
messages = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    top_p=0.9,
)

# Keep only the newly generated tokens (drop the prompt).
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

think_token_id = tokenizer.convert_tokens_to_ids("</think>")
if think_token_id in output_ids:
    idx = output_ids.index(think_token_id)
    thinking = tokenizer.decode(output_ids[:idx], skip_special_tokens=True).strip()
    response = tokenizer.decode(output_ids[idx + 1:], skip_special_tokens=True).strip()
else:
    thinking, response = "", tokenizer.decode(output_ids, skip_special_tokens=True).strip()

print("🧠 think", thinking)
print("💬 answer", response)
```
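
The post-processing above separates the reasoning from the answer at the `</think>` marker. The same split, shown on a plain string as an illustrative helper:

```python
def split_thinking(text: str, marker: str = "</think>") -> tuple[str, str]:
    """Split decoded model output into (thinking, answer) at the first end-of-think marker."""
    if marker in text:
        thinking, _, answer = text.partition(marker)
        return thinking.strip(), answer.strip()
    # No marker: the whole text is the answer.
    return "", text.strip()

thinking, answer = split_thinking("I should greet politely.</think>Hello, traveler!")
```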

### Simulated example of input and output 📥📤

- Input 📥

```json
{
  "NPCID": "npc_585919",
  "Character Background": "A succubus apothecary from the Mysterious Forest, aged 20. She studied herbal medicine and magical knowledge in the forest since childhood. After her homeland suffered evil magic corruption and her family perished, she resolved to find a way to counteract the magic. Proficient in potion brewing, charm magic, and magical perception.",
  "Traits": {
    "Core Personality": [
      "Charismatic",
      "Cunning",
      "Curious"
    ],
    "Special Skills": [
      "Potion Brewing",
      "Charm Magic",
      "Magical Perception"
    ]
  },
  "Dynamic Status": {
    "Current Emotion": "Anger (due to worsening magic pollution in the forest recently, which has weakened her powers)"
  },
  "Memory Events": [
    {
      "eventid": "evt_20240805_001",
      "timestamp": "2024-08-05",
      "Event Type": "Assistance",
      "Initiator": "player_004",
      "Recipient": "npc_585919",
      "Action": "Provided magical books",
      "Impact": "Developed favorable impression of player_004, gained additional magical energy"
    },
    {
      "eventid": "evt_20240720_002",
      "timestamp": "2024-07-20",
      "Event Type": "Conflict",
      "Initiator": "npc_006",
      "Recipient": "npc_585919",
      "Action": "Stole herbs",
      "Impact": "Developed hostility toward npc_006, increased vigilance"
    },
    {
      "eventid": "evt_20240712_003",
      "timestamp": "2024-07-12",
      "Event Type": "Transaction",
      "Initiator": "player_005",
      "Recipient": "npc_585919",
      "Action": "Purchased potions",
      "Impact": "Earned gold coins, improved mood, used charm magic to enhance transaction"
    },
    {
      "eventid": "evt_20240630_004",
      "timestamp": "2024-06-30",
      "Event Type": "Assistance",
      "Initiator": "npc_585919",
      "Recipient": "player_006",
      "Action": "Healed wounds",
      "Impact": "Generated favorable impression and dependency through magical healing"
    },
    {
      "eventid": "evt_20240615_005",
      "timestamp": "2024-06-15",
      "Event Type": "Exploration",
      "Initiator": "npc_585919",
      "Recipient": "npc_585919",
      "Action": "Discovered new herbs",
      "Impact": "Expanded magical knowledge, enhanced charm"
    }
  ],
  "Current Event": "Encountered a lecherous hero"
}
```

- Output 📤

```json
{"Event Reaction": "Upon sensing the hero's harassment, npc_585919 casts a charm spell to induce hallucinations, uses magical perception to track his movements, and sets a trap deep within the forest"}
```
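
To feed a state object like the one above into the chat interface shown earlier, it can be serialized into the user message. A minimal sketch (the system-prompt wording here is an assumption for illustration):

```python
import json

def build_npc_messages(npc_state: dict) -> list[dict]:
    """Wrap an NPC state object as chat messages suitable for apply_chat_template."""
    system = (
        "You are the behavior-decision engine for a game NPC. "
        "Reply with a JSON object containing an 'Event Reaction'."
    )
    return [
        {"role": "system", "content": system},
        # ensure_ascii=False keeps any non-ASCII field values readable in the prompt.
        {"role": "user", "content": json.dumps(npc_state, ensure_ascii=False)},
    ]

messages = build_npc_messages(
    {"NPCID": "npc_585919", "Current Event": "Encountered a lecherous hero"}
)
```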

## Training Details 🏋️‍♀️

### Training Procedure

This model is fine-tuned from unsloth/qwen3-4b-unsloth-bnb-4bit using the LoRA (Low-Rank Adaptation) method from the Unsloth toolkit for efficient, low-resource tuning.

- Fine-tuning method: LoRA
- Trainer: Unsloth SFTTrainer
- Model format: adapter-only weights in safetensors
- Training hardware: NVIDIA A10 GPU

## Environmental Impact 🌱

- **Hardware Type:** cloud server
- **Cloud Provider:** Google Cloud Platform & Alibaba Cloud
- **Compute Region:** North America & Asia
- **Carbon Emitted:** <1 kg

**BibTeX:**

```bibtex
@misc{NewOrigin2025GameSoul-AI-NPC,
  title = {GameSoul-AI-NPC: A LoRA fine-tuned Qwen3-4B model for game NPC reasoning and interaction},
  author = {NewOrigin},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {https://huggingface.co/NewOrigin/GameSoul-AI-NPC-4B-v0.1}
}
```

**APA:**

NewOrigin. (2025). *GameSoul-AI-NPC: A LoRA fine-tuned Qwen3-4B model for game NPC reasoning and interaction*. Hugging Face. <https://huggingface.co/NewOrigin/GameSoul-AI-NPC-4B-v0.1>

## Model Card Authors ✍️

- **Author:** NewOrigin

## Model Card Contact 📧

For questions, feedback, or collaboration inquiries, please contact:

**Email:** [email protected]

### Framework versions

- PEFT 0.16.0
README_ZH_CN.md
ADDED
@@ -0,0 +1,326 @@

---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/qwen3-4b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
- game
- npc
- gamesoul
- ai
- RAG
- MCP
- 游戏
- Unreal
- Unity
- Cocos
---

# Model Card for GameSoul-AI-NPC

![GameSoul](DocsAsset/GAMESOUL_top_pic.png)

[English](README.md) | [中文]

🤖 GameSoul-AI-NPC is a game NPC behavior-decision model that fuses multi-source information to generate dynamic behaviors consistent with each character's setting. It supports real-time environment responses, event reactions, memory recall, and character-consistency maintenance, and can invoke Reasoning, RAG (Retrieval-Augmented Generation), and MCP (Multi-Character Planning).

## Model Details 🔍

### Core Capabilities

| Module | Capabilities |
|--------|--------------|
| **Environment Perception** | Parses scene state, player interactions, and real-time signals such as time/weather |
| **Memory System** | Supports long-term behavior memory (stored in a database) |
| **Character Consistency** | Generates in-character behavior from a predefined background (personality, goals, identity, abilities, memory) |
| **Dynamic Decision** | Generates action sequences (movement, dialogue, interaction) from the combined state |

### Architecture 🏗️

```mermaid
graph TD
    A[🌏 Environment State] --> C(🧠 LLM Decision Engine)
    B[🧙‍♂️ Character Memory] --> C
    D[💭 Current Event] --> C
    C --> E{Behavior Arbitration}
    E --> F[👊 Action Commands]
    E --> G[💬 Natural Language Feedback]
    E --> H[📌 Store Memory]
```

### Extension Interface

🔌 MCP protocol (reference only): call MCP through an `npc_memory_api` endpoint (example) to fetch information from the database.

```json
{
  "jsonrpc": "2.0",
  "id": 123456789,
  "method": "get_npc_memory",
  "params": {
    "player_action": "steal_item",
    "npc_id": "npc_123456"
  }
}
```

### Model Description 📝

- **Developed by:** NewOrigin
- **Funded by:** NewOrigin
- **Shared by:** NewOrigin
- **Model type:** Decoder-only Transformer
- **Language(s) (NLP):** English, Chinese, and 110+ other languages
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b

### Model Sources 🌐

- **Repository:** <https://huggingface.co/unsloth/Qwen3-4B>

## Uses 🚀

This model is a fine-tuned version of unsloth/qwen3-4b, built to give non-player characters (NPCs) intelligent behavior. It generates dynamic responses from character background, memory context, and environmental state, and supports Reasoning, RAG (Retrieval-Augmented Generation), and MCP (Multi-Character Planning) calls.

Intended users:

- Game developers and game designers
- Researchers in game NPC behavior AI
- Game studios and indie developers
- Other interested users

Potentially affected parties:

- End players interacting with NPCs

### Direct Use 🖥️

The model can be used directly in game environments to generate AI behavior without additional fine-tuning. It produces NPC responses that fit the context, memory, player actions, and environmental state. Developers can call it through a reasoning interface or integrate it into game systems, decision logic, or RAG workflows.

Typical scenarios include:

- Automated NPC ecosystems for real-time world simulation
- Story- and memory-based NPC interactions
- Behavior planning based on environmental conditions
- Multi-agent collaborative reasoning and responses

### Downstream Use 📦

The model is suitable for embedding into downstream systems such as game backends, serving as the core reasoning and dialogue engine for AI-driven NPCs, integrated with:

- Game engines, for real-time dialogue generation and behavior control
- Multi-agent simulation platforms, providing interaction backed by long-term memory and contextual reasoning
- RAG-based reasoning systems that enhance NPC decision-making through knowledge retrieval
- Databases that support the MCP protocol, used to store NPC data and fetch it when needed

Further fine-tuning can be applied to match game tone, settings, or mission requirements.

### Out-of-Scope Use ⚠️

This model is not suitable for high-risk or security-sensitive applications, for example:

- Legal, medical, financial, or safety-related decision-making tasks
- Scenarios requiring high factual accuracy or ethical reasoning

## Bias, Risks, and Limitations 🚧

The model may inherit biases from its pre-training or fine-tuning corpora, such as cultural stereotypes, sexual content, gender bias, and character behavior patterns.

Technical limitations include:

- It cannot verify the truthfulness or logical correctness of generated content

In any system that surfaces generated content, especially to underage players, filtering of outputs is recommended.

### Recommendations 💡

- Thoroughly test the model across a range of game scenarios before deployment to understand its boundaries and potential failure modes.
- Build a framework in the game engine that can execute actions before integrating the model.

## How to Get Started with the Model 🚩

The following example shows how to load and call the fine-tuned model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "NewOrigin/GameSoul-AI-NPC-4B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "input your content"
messages = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    top_p=0.9,
)

# Keep only the newly generated tokens (drop the prompt).
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

think_token_id = tokenizer.convert_tokens_to_ids("</think>")
if think_token_id in output_ids:
    idx = output_ids.index(think_token_id)
    thinking = tokenizer.decode(output_ids[:idx], skip_special_tokens=True).strip()
    response = tokenizer.decode(output_ids[idx + 1:], skip_special_tokens=True).strip()
else:
    thinking, response = "", tokenizer.decode(output_ids, skip_special_tokens=True).strip()

print("🧠 think", thinking)
print("💬 answer", response)
```

### Simulated example of input and output 📥📤

- Input 📥

```json
{
  "NPCID": "npc_585919",
  "Character Background": "A succubus apothecary from the mysterious forest, twenty years old, who has studied herbs and magical knowledge in the forest since childhood. After her homeland was struck by evil magic and her family died, she resolved to find a way to break the spell. She excels at potion brewing, charm magic, and magical perception.",
  "Traits": {
    "Core Personality": [
      "Seductive",
      "Cunning",
      "Curious"
    ],
    "Special Skills": [
      "Potion Brewing",
      "Charm Magic",
      "Magical Perception"
    ]
  },
  "Dynamic Status": {
    "Current Emotion": "Anger (magic pollution in the forest has worsened recently, weakening her power)"
  },
  "Memory Events": [
    {
      "eventid": "evt_20240805_001",
      "timestamp": "2024-08-05",
      "Event Type": "Assistance",
      "Initiator": "player_004",
      "Recipient": "npc_585919",
      "Action": "Provided magical books",
      "Impact": "Developed favorable impression of player_004, gained additional magical energy"
    },
    {
      "eventid": "evt_20240720_002",
      "timestamp": "2024-07-20",
      "Event Type": "Conflict",
      "Initiator": "npc_006",
      "Recipient": "npc_585919",
      "Action": "Stole herbs",
      "Impact": "npc_585919 developed hostility toward npc_006, increased vigilance"
    },
    {
      "eventid": "evt_20240712_003",
      "timestamp": "2024-07-12",
      "Event Type": "Transaction",
      "Initiator": "player_005",
      "Recipient": "npc_585919",
      "Action": "Purchased potions",
      "Impact": "Earned gold coins, improved mood, used charm to enhance the transaction"
    },
    {
      "eventid": "evt_20240630_004",
      "timestamp": "2024-06-30",
      "Event Type": "Assistance",
      "Initiator": "npc_585919",
      "Recipient": "player_006",
      "Action": "Healed wounds",
      "Impact": "Generated favorable impression and dependency through magical healing"
    },
    {
      "eventid": "evt_20240615_005",
      "timestamp": "2024-06-15",
      "Event Type": "Exploration",
      "Initiator": "npc_585919",
      "Recipient": "npc_585919",
      "Action": "Discovered new herbs",
      "Impact": "Expanded magical knowledge, enhanced charm"
    }
  ],
  "Current Event": "Encountered a lecherous hero"
}
```

- Output 📤

```json
{"Event Reaction": "After sensing the hero's harassment, npc_585919 casts a charm spell to trap him in hallucinations, tracks his movements with magical perception, and sets a trap deep in the forest"}
```

## Training Details 🏋️‍♀️

### Training Procedure

This model is fine-tuned from unsloth/qwen3-4b-unsloth-bnb-4bit using the LoRA (Low-Rank Adaptation) method from the Unsloth toolkit for efficient, low-resource tuning.

- Fine-tuning method: LoRA
- Trainer: Unsloth SFTTrainer
- Model format: adapter-only weights in safetensors
- Training hardware: NVIDIA A10 GPU

## Environmental Impact 🌱

- **Hardware Type:** cloud server
- **Cloud Provider:** Google Cloud Platform & Alibaba Cloud
- **Compute Region:** North America & Asia
- **Carbon Emitted:** <1 kg

**BibTeX:**

```bibtex
@misc{NewOrigin2025GameSoul-AI-NPC,
  title = {GameSoul-AI-NPC: A LoRA fine-tuned Qwen3-4B model for game NPC reasoning and interaction},
  author = {NewOrigin},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {https://huggingface.co/NewOrigin/GameSoul-AI-NPC-4B-v0.1}
}
```

**APA:**

NewOrigin. (2025). *GameSoul-AI-NPC: A LoRA fine-tuned Qwen3-4B model for game NPC reasoning and interaction*. Hugging Face. <https://huggingface.co/NewOrigin/GameSoul-AI-NPC-4B-v0.1>

## Model Card Authors ✍️

- **Author:** NewOrigin

## Model Card Contact 📧

For questions, suggestions, or collaboration inquiries, please contact:

**Email:** [email protected]

### Framework versions

- PEFT 0.16.0
adapter_config.json
ADDED
@@ -0,0 +1,41 @@
```json
{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "unsloth/qwen3-4b-unsloth-bnb-4bit",
  "bias": "none",
  "corda_config": null,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "qalora_group_size": 16,
  "r": 32,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "gate_proj",
    "down_proj",
    "q_proj",
    "o_proj",
    "up_proj",
    "k_proj",
    "v_proj"
  ],
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
```
adapter_model.safetensors
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:ae419f50758c68e93155cd2ceb130683efeec93ac0e0e2ad2ee57b3d8a98fe95
size 264308896
```
added_tokens.json
ADDED
@@ -0,0 +1,28 @@
```json
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
```
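
The added-token mapping above can be sanity-checked as plain data. A small sketch (the dict copies a subset of the file's entries):

```python
# Subset of added_tokens.json, copied verbatim for checking.
added_tokens = {
    "<|endoftext|>": 151643,
    "<|im_start|>": 151644,
    "<|im_end|>": 151645,
    "<tool_call>": 151657,
    "</tool_call>": 151658,
    "<think>": 151667,
    "</think>": 151668,
}

# All added tokens sit at or above id 151643, and every id is unique.
assert all(i >= 151643 for i in added_tokens.values())
assert len(set(added_tokens.values())) == len(added_tokens)
```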
chat_template.jinja
ADDED
@@ -0,0 +1,97 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
{%- if tools %}
|
2 |
+
{{- '<|im_start|>system\n' }}
|
3 |
+
{%- if messages[0].role == 'system' %}
|
4 |
+
{{- messages[0].content + '\n\n' }}
|
5 |
+
{%- endif %}
|
6 |
+
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
|
7 |
+
{%- for tool in tools %}
|
8 |
+
{{- "\n" }}
|
9 |
+
{{- tool | tojson }}
|
10 |
+
{%- endfor %}
|
11 |
+
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
|
12 |
+
{%- else %}
|
13 |
+
{%- if messages[0].role == 'system' %}
|
14 |
+
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
|
15 |
+
{%- endif %}
|
16 |
+
{%- endif %}
|
17 |
+
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
|
18 |
+
{%- for forward_message in messages %}
|
19 |
+
{%- set index = (messages|length - 1) - loop.index0 %}
|
20 |
+
{%- set message = messages[index] %}
|
21 |
+
{%- set tool_start = '<tool_response>' %}
|
22 |
+
{%- set tool_start_length = tool_start|length %}
|
23 |
+
{%- set start_of_message = message.content[:tool_start_length] %}
|
24 |
+
{%- set tool_end = '</tool_response>' %}
|
25 |
+
    {%- set tool_end_length = tool_end|length %}
    {%- set start_pos = (message.content|length) - tool_end_length %}
    {%- if start_pos < 0 %}
        {%- set start_pos = 0 %}
    {%- endif %}
    {%- set end_of_message = message.content[start_pos:] %}
    {%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
        {%- set ns.multi_step_tool = false %}
        {%- set ns.last_query_index = index %}
    {%- endif %}
{%- endfor %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set content = message.content %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in message.content %}
                {%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
                {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %}
                {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- if loop.index0 > ns.last_query_index %}
            {%- if loop.last or (not loop.last and reasoning_content) %}
                {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
    {%- if enable_thinking is defined and enable_thinking is false %}
        {{- '<think>\n\n</think>\n\n' }}
    {%- endif %}
{%- endif %}
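The assistant branch of the template splits a raw message into reasoning and answer around the `</think>` marker. The same logic can be mirrored in plain Python to see what the template does; this is a sketch with a hypothetical helper name, not part of the repository:

```python
def split_reasoning(message_content: str) -> tuple[str, str]:
    """Return (reasoning_content, content) the way the Jinja template does."""
    if '</think>' not in message_content:
        return '', message_content
    # content: everything after the last '</think>', left-stripped of newlines
    content = message_content.split('</think>')[-1].lstrip('\n')
    # reasoning: everything before the first '</think>' ...
    reasoning = message_content.split('</think>')[0].rstrip('\n')
    # ... with any leading '<think>' tag removed
    reasoning = reasoning.split('<think>')[-1].lstrip('\n')
    return reasoning, content

raw = "<think>\nThe user wants a greeting.\n</think>\n\nHello!"
reasoning, answer = split_reasoning(raw)
# reasoning == "The user wants a greeting.", answer == "Hello!"
```

Note the template only performs this split when `message.reasoning_content` is absent; when the field is provided explicitly, the content is used as-is.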
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
special_tokens_map.json
ADDED
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|vision_pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
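The map above declares `<|im_end|>` as the end-of-sequence token and `<|vision_pad|>` as padding. A quick stdlib-only sanity check is possible because eos/pad entries may be either plain strings or AddedToken-style dicts; this sketch (hypothetical helper name) normalizes both forms:

```python
import json

def check_special_tokens_map(raw: str) -> dict:
    """Parse a special_tokens_map.json string and normalize eos/pad to strings."""
    m = json.loads(raw)
    assert isinstance(m["additional_special_tokens"], list)

    def content(tok):
        # AddedToken-style dicts carry the literal under "content"
        return tok["content"] if isinstance(tok, dict) else tok

    m["eos_token_str"] = content(m["eos_token"])
    m["pad_token_str"] = content(m["pad_token"])
    return m

# Trimmed copy of the file above, for illustration.
raw = '''{
  "additional_special_tokens": ["<|im_start|>", "<|im_end|>"],
  "eos_token": {"content": "<|im_end|>", "lstrip": false, "normalized": false,
                "rstrip": false, "single_word": false},
  "pad_token": {"content": "<|vision_pad|>", "lstrip": false, "normalized": false,
                "rstrip": false, "single_word": false}
}'''
m = check_special_tokens_map(raw)
# m["eos_token_str"] == "<|im_end|>", m["pad_token_str"] == "<|vision_pad|>"
```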
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654
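Those three lines are a Git LFS pointer, not the tokenizer itself: the real 11 MB `tokenizer.json` lives in LFS storage and is addressed by the SHA-256 object ID. A pointer file is just key-value lines, so parsing one takes only the stdlib (the helper name here is illustrative):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file ('key value' lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(' ')
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4\n"
    "size 11422654\n"
)
info = parse_lfs_pointer(pointer)
# The oid field is "<algorithm>:<hex digest>"
algo, _, digest = info["oid"].partition(":")
```

Cloning without LFS installed yields only these pointer files; `git lfs pull` (or the Hub's resolve endpoint) fetches the actual blob.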
tokenizer_config.json
ADDED
@@ -0,0 +1,240 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151646": {"content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151647": {"content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151648": {"content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151649": {"content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151650": {"content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151651": {"content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151652": {"content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151653": {"content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151654": {"content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151655": {"content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151656": {"content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151657": {"content": "<tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151658": {"content": "</tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151659": {"content": "<|fim_prefix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151660": {"content": "<|fim_middle|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151661": {"content": "<|fim_suffix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151662": {"content": "<|fim_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151663": {"content": "<|repo_name|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151664": {"content": "<|file_sep|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151665": {"content": "<tool_response>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151666": {"content": "</tool_response>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151667": {"content": "<think>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151668": {"content": "</think>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false}
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 40960,
  "pad_token": "<|vision_pad|>",
  "padding_side": "right",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
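In `added_tokens_decoder`, string token IDs (151643–151668) map to AddedToken-style entries, and only some are flagged `"special": true`. The distinction matters: decoding with `skip_special_tokens=True` drops the `special` ones, while `<think>`, `<tool_call>`, and friends survive so the chat template's markers remain visible in output. A trimmed, stdlib-only sketch of that structure:

```python
# Trimmed illustration of the added_tokens_decoder mapping above
# (a few representative entries, not the full table).
added_tokens_decoder = {
    "151643": {"content": "<|endoftext|>", "special": True},
    "151645": {"content": "<|im_end|>",    "special": True},
    "151657": {"content": "<tool_call>",   "special": False},
    "151667": {"content": "<think>",       "special": False},
}

# Reverse lookup: token string -> integer ID.
token_to_id = {v["content"]: int(k) for k, v in added_tokens_decoder.items()}

# Tokens a decoder may strip with skip_special_tokens=True.
special = sorted(v["content"] for v in added_tokens_decoder.values() if v["special"])
```

The same pattern scales to all 26 entries; `tokenizer_class: Qwen2Tokenizer` means the base BPE vocabulary (from `vocab.json`/`merges.txt`) sits below these added IDs.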
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff