---
license: llama3.2
pipeline_tag: text-generation
---

**Llama3.2-Typhoon2-1B**: Thai Large Language Model (Instruct)

**Llama3.2-Typhoon2-1B-instruct** is an instruct Thai 🇹🇭 large language model with 1 billion parameters, based on Llama3.2-1B.

| Model | IFEval - TH | IFEval - EN | MT-Bench TH | MT-Bench EN | Thai Code-Switching (t=0.7) | Thai Code-Switching (t=1.0) | FunctionCall-TH | FunctionCall-EN |
|---|---|---|---|---|---|---|---|---|
| **Typhoon2 1B instruct** | **52.46%** | **53.35%** | **3.9725** | 5.2125 | 96.4% | **88%** | **34.96%** | **45.60%** |
| **Qwen2.5 1.5B instruct** | 44.42% | 48.45% | 2.9395 | **6.9343** | 82.6% | 20.6% | 13.83% | 17.88% |
| **Llama3.2 1B instruct** | 31.76% | 51.15% | 2.5824 | 6.229 | **97.8%** | 22.6% | 29.88% | 36.50% |

<!-- TODO: add image - general / domain specific / long context -->

For the release post, please see our [blog](...).

\*To acknowledge Meta's effort in creating the foundation model and to comply with the license, we explicitly include "llama-3.2" in the model name.

## **Model Description**

- **Model type**: A 1B instruct decoder-only model based on the Llama architecture.
- **Requirement**: transformers 4.45.0 or newer (see the version check below).
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE)
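
If you want to verify the requirement at runtime, a minimal check might look like this (it uses the `packaging` library, which ships as a transformers dependency):

```python
from packaging import version
import transformers

# Fail early if the installed transformers is older than the model card requires.
assert version.parse(transformers.__version__) >= version.parse("4.45.0"), \
    "Please upgrade: pip install -U 'transformers>=4.45.0'"
```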

## Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "scb10x/llama3.2-typhoon2-1b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Typhoon, an AI assistant created by SCB 10X, designed to be helpful, harmless, and honest. Typhoon assists with analysis, answering questions, math, coding, creative writing, teaching, role-play, discussions, and more. Typhoon responds directly without affirmations or filler phrases (e.g., “Certainly,” “Of course”). Responses do not start with “Certainly” in any form. Typhoon adheres to these rules in all languages and always replies in the user's language or as requested. Communicate in fluid, conversational prose, showing genuine interest, empathy, and presenting information clearly and visually."},
    {"role": "user", "content": "ขอสูตรไก่ย่าง"},  # "Please give me a grilled-chicken recipe"
]

# Render the conversation with the model's chat template.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the end-of-sequence token or the Llama 3 end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.4,
    top_p=0.9,
)
# Decode only the newly generated tokens, excluding the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Inference Server Hosting Example

```bash
pip install vllm
vllm serve scb10x/llama3.2-typhoon2-1b-instruct
# see more information at https://docs.vllm.ai/
```
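
Once the server is running, it exposes an OpenAI-compatible API, by default at `http://localhost:8000/v1`. A minimal client sketch using the `openai` package (the `api_key` value is a placeholder; vLLM ignores it unless you configure authentication):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="scb10x/llama3.2-typhoon2-1b-instruct",
    messages=[{"role": "user", "content": "ขอสูตรไก่ย่าง"}],  # "Please give me a grilled-chicken recipe"
    max_tokens=512,
    temperature=0.4,
)
print(response.choices[0].message.content)
```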

## Function-Call Example

```python
import ast

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "scb10x/llama3.2-typhoon2-1b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

get_weather_api = {
    "name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, New York",
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "The unit of temperature to return",
            },
        },
        "required": ["location"],
    },
}

search_api = {
    "name": "search",
    "description": "Search for information on the internet",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The search query, e.g. 'latest news on AI'",
            }
        },
        "required": ["query"],
    },
}

get_stock = {
    "name": "get_stock_price",
    "description": "Get the stock price",
    "parameters": {
        "type": "object",
        "properties": {
            "symbol": {
                "type": "string",
                "description": "The stock symbol, e.g. AAPL, GOOG",
            }
        },
        "required": ["symbol"],
    },
}

# Tool inputs use the same JSON-schema format as OpenAI function-calling tools.
openai_format_tools = [get_weather_api, search_api, get_stock]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "ขอราคาหุ้น Tasla (TLS) และ Amazon (AMZ) ?"},  # "Get the stock prices of Tasla (TLS) and Amazon (AMZ)?"
]

# (Optional) inspect the rendered prompt as a string.
final_prompt = tokenizer.apply_chat_template(
    messages, tools=openai_format_tools, add_generation_prompt=True, tokenize=False
)

inputs = tokenizer.apply_chat_template(
    messages, tools=openai_format_tools, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    num_return_sequences=1,
    eos_token_id=[tokenizer.eos_token_id, 128009],  # 128009 = <|eot_id|>
)
response = outputs[0][inputs.shape[-1]:]

print("Output:", tokenizer.decode(response, skip_special_tokens=True))


# Decoding function utilities
def resolve_ast_by_type(value):
    if isinstance(value, ast.Constant):
        if value.value is Ellipsis:
            output = "..."
        else:
            output = value.value
    elif isinstance(value, ast.UnaryOp):
        # Assumes a unary minus applied to a constant.
        output = -value.operand.value
    elif isinstance(value, ast.List):
        output = [resolve_ast_by_type(v) for v in value.elts]
    elif isinstance(value, ast.Dict):
        output = {
            resolve_ast_by_type(k): resolve_ast_by_type(v)
            for k, v in zip(value.keys, value.values)
        }
    elif isinstance(value, ast.NameConstant):  # handle boolean values on older Pythons
        output = value.value
    elif isinstance(value, ast.BinOp):  # handle arithmetic expressions as arguments
        output = eval(ast.unparse(value))
    elif isinstance(value, ast.Name):
        output = value.id
    elif isinstance(value, ast.Call):
        if len(value.keywords) == 0:
            output = ast.unparse(value)
        else:
            output = resolve_ast_call(value)
    elif isinstance(value, ast.Tuple):
        output = tuple(resolve_ast_by_type(v) for v in value.elts)
    elif isinstance(value, ast.Lambda):
        # Evaluate the lambda source into a function object.
        output = eval(ast.unparse(value))
    elif isinstance(value, ast.Ellipsis):
        output = "..."
    elif isinstance(value, ast.Subscript):
        output = ast.unparse(value.value) + "[" + ast.unparse(value.slice) + "]"
    else:
        raise Exception(f"Unsupported AST type: {type(value)}")
    return output


def resolve_ast_call(elem):
    # Rebuild a dotted function name from nested Attribute nodes.
    func_parts = []
    func_part = elem.func
    while isinstance(func_part, ast.Attribute):
        func_parts.append(func_part.attr)
        func_part = func_part.value
    if isinstance(func_part, ast.Name):
        func_parts.append(func_part.id)
    func_name = ".".join(reversed(func_parts))
    args_dict = {}
    for arg in elem.keywords:
        output = resolve_ast_by_type(arg.value)
        args_dict[arg.arg] = output
    return {func_name: args_dict}


def ast_parse(input_str, language="Python"):
    if language == "Python":
        cleaned_input = input_str.strip("[]'")
        parsed = ast.parse(cleaned_input, mode="eval")
        extracted = []
        if isinstance(parsed.body, ast.Call):
            extracted.append(resolve_ast_call(parsed.body))
        else:
            for elem in parsed.body.elts:
                assert isinstance(elem, ast.Call)
                extracted.append(resolve_ast_call(elem))
        return extracted
    else:
        raise NotImplementedError(f"Unsupported language: {language}")


def parse_nested_value(value):
    """
    Parse a potentially nested value from the AST output.

    Args:
        value: The value to parse, which could be a nested dictionary (including another function call) or a simple value.

    Returns:
        str: A string representation of the value, handling nested function calls and nested dictionary function arguments.
    """
    if isinstance(value, dict):
        # Check if the dictionary represents a function call (i.e., every value is another dictionary or complex structure)
        if all(isinstance(v, dict) for v in value.values()):
            func_name = list(value.keys())[0]
            args = value[func_name]
            args_str = ", ".join(
                f"{k}={parse_nested_value(v)}" for k, v in args.items()
            )
            return f"{func_name}({args_str})"
        else:
            # If it's a simple dictionary, treat it as key-value pairs
            return (
                "{"
                + ", ".join(f"'{k}': {parse_nested_value(v)}" for k, v in value.items())
                + "}"
            )
    return repr(value)


def decoded_output_to_execution_list(decoded_output):
    """
    Convert decoded output to a list of executable function calls.

    Args:
        decoded_output (list): A list of dictionaries representing function calls.

    Returns:
        list: A list of strings, each representing an executable function call.
    """
    execution_list = []
    for function_call in decoded_output:
        for key, value in function_call.items():
            args_str = ", ".join(
                f"{k}={parse_nested_value(v)}" for k, v in value.items()
            )
            execution_list.append(f"{key}({args_str})")
    return execution_list


def default_decode_ast_prompting(result, language="Python"):
    # Normalize the raw model output into a parsable "[call(...), ...]" string.
    result = result.strip("`\n ")
    if not result.startswith("["):
        result = "[" + result
    if not result.endswith("]"):
        result = result + "]"
    decoded_output = ast_parse(result, language)
    return decoded_output

fc_result = default_decode_ast_prompting(tokenizer.decode(response, skip_special_tokens=True))
print(fc_result)  # e.g. [{'get_stock_price': {'symbol': 'TLS'}}, {'get_stock_price': {'symbol': 'AMZ'}}]
```
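
The `decoded_output_to_execution_list` helper defined above turns the parsed calls into executable call strings, which you can then dispatch against your own tool implementations. A small sketch (the `get_stock_price` stub and the registry are hypothetical, for illustration only):

```python
execution_list = decoded_output_to_execution_list(fc_result)
print(execution_list)  # e.g. ["get_stock_price(symbol='TLS')", "get_stock_price(symbol='AMZ')"]

# Hypothetical local implementation of the advertised tool.
def get_stock_price(symbol):
    return {"symbol": symbol, "price": 123.45}  # stub for illustration

# Dispatch each parsed call through an explicit registry (safer than eval).
registry = {"get_stock_price": get_stock_price}
for call in fc_result:
    for name, kwargs in call.items():
        print(registry[name](**kwargs))
```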

## **Intended Uses & Limitations**

This model is an instruct model; however, it is still under development. It incorporates some level of guardrails, but it may still produce answers that are inaccurate, biased, or otherwise objectionable in response to user prompts. We recommend that developers assess these risks in the context of their use case.

## **Follow us**

**https://twitter.com/opentyphoon**

## **Support**

**https://discord.gg/CqyBscMFpg**

## **Citation**

- If you find Typhoon2 useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
  title={Typhoon: Thai Large Language Models},
  author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
  year={2023},
  journal={arXiv preprint arXiv:2312.13951},
  url={https://arxiv.org/abs/2312.13951}
}
```