---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/qwen3-4b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
- game
- npc
- gamesoul
- ai
- RAG
- MCP
- 游戏
- Unreal
- Unity
- Cocos
license: apache-2.0
language:
- en
- zh
---

# Model Card for GameSoul-AI-NPC

![GameSoul-AI-NPC banner](DocsAsset/GAMESOUL_top_pic.png)

[English] | [中文](README_ZH_CN.md)

🤖 GameSoul-AI-NPC is a game NPC behavior decision model that fuses multi-source information to generate dynamic actions consistent with character settings. It supports real-time environment responses, event reactions, memory retrieval, and character consistency, and it can invoke Reasoning, RAG (Retrieval-Augmented Generation), and MCP (Multi-Character Planning).

## Model Details 🔍

### Core Capabilities

| Module | Capabilities |
|------|----------|
| **Environment Perception** | Parses scene state, player interactions, time/weather signals |
| **Memory System** | Supports long-term behavior memory (stored in a database) |
| **Character Consistency** | Generates actions according to predefined background (personality, goals, identity, abilities, memory) |
| **Dynamic Decision** | Generates action sequences (movement, dialogue, interaction) based on combined state |

### Architecture🏗️

```mermaid
graph TD
A[🌏 Environment State] --> C(🧠LLM Decision Engine)
B[🧙‍♂️Character Memory] --> C
D[💭Current Event] --> C
C --> E{Behavior Arbitration}
E --> F[👊Action Commands]
E --> G[💬Natural Language Feedback]
E --> H[📌Store Memory]
```
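
To make the data flow above concrete, here is a minimal sketch (not part of the released code) of how the three inputs might be fused into a single prompt for the decision engine; the function name and field names are illustrative assumptions.

```python
import json

def build_decision_prompt(environment: dict, memory_events: list, current_event: str) -> str:
    """Fuse environment state, character memory, and the current event into
    one prompt for the LLM decision engine (field names are illustrative)."""
    return (
        "You are the behavior decision engine for a game NPC.\n"
        f"Environment state: {json.dumps(environment, ensure_ascii=False)}\n"
        f"Memory events: {json.dumps(memory_events, ensure_ascii=False)}\n"
        f"Current event: {current_event}\n"
        'Reply with a JSON object containing an "Event Reaction" field.'
    )

# Toy usage
prompt = build_decision_prompt(
    {"time": "night", "weather": "rain", "location": "forest_edge"},
    [{"Event Type": "Assistance", "Action": "Provided magical books"}],
    "A stranger approaches the apothecary's hut",
)
```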

### Extension Interface

🔌 MCP Protocol (reference only): call MCP via an `npc_memory_api` service (example) to fetch NPC information from the database:

```json
// example
{
  "jsonrpc": "2.0",
  "id": 123456789,
  "method": "get_npc_memory",
  "params": {
    "player_action": "steal_item",
    "npc_id": "npc_123456"
  }
}
```
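
Below is a minimal sketch of how a game backend might send this JSON-RPC request to a memory service and read back the stored events; the endpoint URL and response shape are assumptions, not part of the released protocol.

```python
import json
import urllib.request

MEMORY_API_URL = "http://localhost:8000/npc_memory_api"  # hypothetical endpoint

def get_npc_memory(npc_id: str, player_action: str) -> dict:
    """Send the JSON-RPC request shown above and return its result payload."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 123456789,
        "method": "get_npc_memory",
        "params": {"player_action": player_action, "npc_id": npc_id},
    }).encode("utf-8")
    req = urllib.request.Request(
        MEMORY_API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read().decode("utf-8"))
    # A JSON-RPC 2.0 reply carries the payload under "result" ("error" on failure).
    return reply.get("result", {})

# memory = get_npc_memory("npc_123456", "steal_item")
```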

### Model Description 📝

- **Developed by:** NewOrigin
- **Funded by:** NewOrigin
- **Shared by:** NewOrigin
- **Model type:** Decoder-only Transformer
- **Language(s) (NLP):** English, Chinese, and 110+ languages
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b

### Model Sources🌐

- **Repository:** <https://huggingface.co/unsloth/Qwen3-4B>

## Uses🚀

This model is a fine-tuned version of unsloth/qwen3-4b, designed to empower NPCs in games with intelligent behavior. It generates dynamic responses based on character background, memory context, and environmental state, supporting Reasoning, RAG, and MCP calls.

Intended users:

- Game developers and designers

- Researchers in game NPC behavior AI

- Game studios and indie developers

- Other interested parties

Potential beneficiaries:

- End players interacting with NPCs

### Direct Use🖥️

The model can be used directly in game environments to generate AI behavior without additional fine-tuning. It produces NPC responses based on context, memory, player actions, and environmental state. Developers can call it via a reasoning interface or integrate it into game logic or RAG workflows.

Typical scenarios include:

- Automated NPC ecosystems for real-time world simulation
- Story and memory-based NPC interactions
- Behavior planning based on environmental conditions
- Multi-agent collaborative reasoning and responses
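
For the memory- and story-based scenarios above, a lightweight retrieval step can select which stored events to include in the prompt. The sketch below uses simple keyword overlap plus recency as a stand-in for a vector-store lookup; the field names follow the input example later in this card.

```python
def retrieve_relevant_memories(memory_events: list, query_keywords: set, top_k: int = 3) -> list:
    """Rank stored events by keyword overlap, breaking ties by timestamp,
    and return the top_k events to prepend to the NPC prompt."""
    def score(event: dict):
        text = " ".join(str(v) for v in event.values()).lower()
        overlap = sum(1 for kw in query_keywords if kw.lower() in text)
        return (overlap, event.get("timestamp", ""))
    return sorted(memory_events, key=score, reverse=True)[:top_k]

# Toy usage: pick the events most relevant to a "charm"-related encounter
# relevant = retrieve_relevant_memories(npc["Memory Events"], {"charm", "hero"})
```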

### Downstream Use📦

This model is suitable for embedding into game systems as the core reasoning and dialogue engine for AI-driven NPCs, integrated with:

- Game engines for real-time dialogue generation and behavior control
- Multi-agent simulation platforms providing long-term memory and contextual reasoning
- RAG-based reasoning systems that enhance NPC decision-making through knowledge retrieval
- MCP protocol-supporting databases to store NPC data and fetch it when needed

Further fine-tuning can be applied to match game tone, settings, or mission requirements.
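
A minimal further fine-tuning sketch with Unsloth and TRL might look like the following. The dataset file, sequence length, and LoRA hyperparameters are placeholders (not the values used to train this model), and exact `SFTTrainer` argument names vary between TRL versions.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model this adapter was trained from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen3-4b-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (placeholder hyperparameters).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Your own NPC behavior data, one formatted training text per line.
dataset = load_dataset("json", data_files="npc_behavior.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(per_device_train_batch_size=2, num_train_epochs=1, output_dir="outputs"),
)
trainer.train()
```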

### Out-of-Scope Use⚠️

This model is not suitable for applications with high-risk or security-sensitive contexts, such as:

- Legal, medical, financial, or safety-critical decision-making
- Scenarios requiring high factual accuracy or ethical reasoning

## Bias, Risks, and Limitations🚧

The model may inherit biases from pre-training or fine-tuning data, including cultural stereotypes, sexual content, gender bias, and character behavior patterns.

Technical limitations include:

- Cannot verify the truthfulness or logical correctness of generated content

Filtering of outputs is recommended, especially when deployed in systems involving minors.

### Recommendations💡

- Thoroughly test the model across various game scenarios before deployment to understand its boundaries and potential failure modes.
- Define an actionable behavior framework within the game engine (the set of commands NPCs may execute) before integrating the model, so generated decisions map to concrete in-game actions.

## How to Get Started with the Model🚩

Example of loading and calling the fine-tuned model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "NewOrigin/GameSoul-AI-NPC-4B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "input your content"
messages = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,   # append the assistant turn so the model generates a reply
    enable_thinking=False         # disable Qwen3 thinking mode for plain responses
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    top_p=0.9,
)

# Keep only the newly generated tokens (drop the prompt)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# Split optional thinking content from the final answer
think_token_id = tokenizer.convert_tokens_to_ids("</think>")
if think_token_id in output_ids:
    idx = output_ids.index(think_token_id)
    thinking = tokenizer.decode(output_ids[:idx], skip_special_tokens=True).strip()
    response = tokenizer.decode(output_ids[idx + 1:], skip_special_tokens=True).strip()
else:
    thinking, response = "", tokenizer.decode(output_ids, skip_special_tokens=True).strip()

print("🧠 think", thinking)
print("💬 answer", response)
```

### Example of simulated input and output📥📤

- Input📥

```json
{
    "NPCID": "npc_585919",
    "Character Background": "A succubus apothecary from the Mysterious Forest, aged 20. She studied herbal medicine and magical knowledge in the forest since childhood. After her homeland suffered evil magic corruption and her family perished, she resolved to find a way to counteract the magic. Proficient in potion brewing, charm magic, and magical perception.",
    "Traits": {
        "Core Personality": [
            "Charismatic",
            "Cunning",
            "Curious"
        ],
        "Special Skills": [
            "Potion Brewing",
            "Charm Magic",
            "Magical Perception"
        ]
    },
    "Dynamic Status": {
        "Current Emotion": "Anger (due to worsening magic pollution in the forest recently, which has weakened her powers)"
    },
    "Memory Events": [
    {
        "eventid": "evt_20240805_001",
        "timestamp": "2024-08-05",
        "Event Type": "Assistance",
        "Initiator": "player_004",
        "Recipient": "npc_585919",
        "Action": "Provided magical books",
        "Impact": "Developed favorable impression of player_004, gained additional magical energy"
    },
    {
        "eventid": "evt_20240720_002",
        "timestamp": "2024-07-20",
        "Event Type": "Conflict",
        "Initiator": "npc_006",
        "Recipient": "npc_585919",
        "Action": "Stole herbs",
        "Impact": "Developed hostility toward npc_006, increased vigilance"
    },
    {
        "eventid": "evt_20240712_003",
        "timestamp": "2024-07-12",
        "Event Type": "Transaction",
        "Initiator": "player_005",
        "Recipient": "npc_585919",
        "Action": "Purchased potions",
        "Impact": "Earned gold coins, improved mood, used charm magic to enhance transaction"
    },
    {
        "eventid": "evt_20240630_004",
        "timestamp": "2024-06-30",
        "Event Type": "Assistance",
        "Initiator": "npc_585919",
        "Recipient": "player_006",
        "Action": "Healed wounds",
        "Impact": "Generated favorable impression and dependency through magical healing"
    },
    {
        "eventid": "evt_20240615_005",
        "timestamp": "2024-06-15",
        "Event Type": "Exploration",
        "Initiator": "npc_585919",
        "Recipient": "npc_585919",
        "Action": "Discovered new herbs",
        "Impact": "Expanded magical knowledge, enhanced charm"
    }
    ],
    "Current Event": "Encountered a lecherous hero"
}
```

- Output📤

```json
{"Event Reaction": "Upon sensing the hero's harassment, npc_585919 casts a charm spell to induce hallucinations, uses magical perception to track his movements, and sets a trap deep within the forest"}
```
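
Before the reply drives game logic, it is worth parsing it defensively, since the model may occasionally return text that is not valid JSON. A minimal sketch using the field name from the example above:

```python
import json

def parse_npc_reaction(model_output: str) -> str:
    """Extract the "Event Reaction" field, falling back to the raw text
    if the reply is not valid JSON."""
    try:
        payload = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output.strip()
    if isinstance(payload, dict):
        return payload.get("Event Reaction", model_output.strip())
    return model_output.strip()

reaction = parse_npc_reaction('{"Event Reaction": "casts a charm spell and sets a trap"}')
```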

## Training Details🏋️‍♀️

### Training Procedure

This model is fine-tuned from unsloth/qwen3-4b-unsloth-bnb-4bit using the LoRA (Low-Rank Adaptation) method from the Unsloth toolkit for efficient low-resource tuning.

- Fine-tuning method: LoRA

- Trainer: Unsloth SFTTrainer

- Model format: adapter-only weights in safetensors

- Training hardware: NVIDIA A10 GPU
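
Because only the adapter weights are published, the adapter can also be loaded explicitly on top of the base model with PEFT, as an alternative to the direct `AutoModelForCausalLM` call shown earlier; this sketch assumes the published safetensors are a standard PEFT LoRA adapter.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/qwen3-4b-unsloth-bnb-4bit"
adapter_id = "NewOrigin/GameSoul-AI-NPC-4B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the GameSoul LoRA adapter to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```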

## Environmental Impact🌱

- **Hardware Type:** cloud server
- **Cloud Provider:** Google Cloud Platform & Alibaba Cloud
- **Compute Region:** North America & Asia
- **Carbon Emitted:** < 1 kg

## Citation📚

**BibTeX:**

```bibtex
@misc{NewOrigin2025GameSoul-AI-NPC,
  title        = {GameSoul-AI-NPC: A LoRA fine-tuned Qwen3-4B model for game NPC reasoning and interaction},
  author       = {NewOrigin},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {https://huggingface.co/NewOrigin/GameSoul-AI-NPC-4B-v0.1}
}
```

**APA:**

NewOrigin. (2025). *GameSoul-AI-NPC: A LoRA fine-tuned Qwen3-4B model for game NPC reasoning and interaction*. Hugging Face. <https://huggingface.co/NewOrigin/GameSoul-AI-NPC-4B-v0.1>

## Model Card Authors✍️

- **Authored by:** NewOrigin

## Model Card Contact📧

For questions, feedback, or collaboration inquiries, please contact:

**Email:** <[email protected]>

### Framework versions

- PEFT 0.16.0