anemll committed on
Commit 508ce38 · verified · 1 Parent(s): 0976fe1

Upload folder using huggingface_hub

.DS_Store ADDED
Binary file (14.3 kB)
 
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,147 @@
1
+ ---
2
+ license: mit
3
+ tags:
4
+ - coreml
5
+ - ANE
6
+ - DeepSeek
7
+ - Apple
8
+ - Apple Neural Engine
9
+ - DeepHermes
10
+ ---
11
+ # ANEMLL
12
+
13
+ **ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE).
14
+
15
+ The goal is to provide a fully open-source pipeline from model conversion to inference for common LLM architectures running on ANE.
16
+
17
+ This enables seamless integration and on-device inference for low-power applications on edge devices, ensuring maximum privacy and security.
18
+
19
+ This is critical for autonomous applications, where models run directly on the device without requiring an internet connection.
20
+
21
+ For more information, visit the [ANEMLL GitHub repository](https://github.com/anemll/anemll).
22
+
23
+
24
+ ---
25
+
26
+ ## License
27
+
28
+ ANEMLL is licensed under the [MIT License](https://opensource.org/license/mit).
29
+ The base model is NVIDIA's Llama-3.1-Nemotron-Nano-8B-v1 (a derivative of Meta's Llama 3.1) and may require a separate license.
30
+
31
+ This test model targets the Llama architecture converted to CoreML. It was released before the official launch of the ANEMLL repository, with minimal documentation, and is intended only for early adopters who requested an early release.
32
+
33
+ ---
34
+
35
+ ## Requirements
36
+
37
+ - **macOS Sequoia** with Apple Neural Engine and 8GB RAM or more
38
+ - **CoreML Tools** and **HuggingFace Transformers** libraries
39
+ - **Python 3.9**
40
+
41
+ `chat.py` provides a sample inference script.
42
+ `chat_full.py` provides a sample inference script with history and conversation management.
43
+
44
+ **Installation**
45
+
46
+ 1. Download the model from Hugging Face:
47
+ ```bash
48
+ # Install required tools
49
+ pip install huggingface_hub
50
+
51
+ # Install Git LFS (Large File Support)
52
+ # macOS with Homebrew:
53
+ brew install git-lfs
54
+ # Or Ubuntu/Debian:
55
+ # sudo apt-get install git-lfs
56
+
57
+ # Initialize Git LFS
58
+ git lfs install
59
+
60
+ # Clone the repository with model files
61
+ git clone https://huggingface.co/anemll/anemll-Llama-3.1-Nemotron-Nano-8B-v1-ctx512_0.3.0
62
+ ```
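+
+ Alternatively, since `huggingface_hub` is installed above, the same files can be fetched without git. This is an optional sketch, not part of the original instructions; the `repo_id` comes from the clone URL and `local_dir` is just an example destination:
+ ```python
+ # Optional alternative to git clone: download the repository with huggingface_hub.
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="anemll/anemll-Llama-3.1-Nemotron-Nano-8B-v1-ctx512_0.3.0",
+     local_dir="anemll-Llama-3.1-Nemotron-Nano-8B-v1-ctx512_0.3.0",  # example destination
+ )
+ ```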
63
+
64
+ 2. Extract model files:
65
+ ```bash
66
+ # Navigate to cloned directory
67
+ cd anemll-Llama-3.1-Nemotron-Nano-8B-v1-ctx512_0.3.0
68
+
69
+ # Pull LFS files (model weights)
70
+ git lfs pull
71
+
72
+ # Extract CoreML model files
73
+ find . -type f -name "*.zip" -exec unzip {} \;
74
+ ```
75
+
76
+ 3. Install dependencies:
77
+ ```bash
78
+ pip install coremltools transformers torch numpy pyyaml
79
+ ```
80
+
81
+ **Coremltools:**
82
+
83
+ See the coremltools installation guide at https://coremltools.readme.io/v4.0/docs/installation
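+
+ To sanity-check a converted model outside of `chat.py`, a chunk can be loaded with `coremltools` the same way the script's `load_model()` does. This is a minimal sketch; the file name is a hypothetical example following the `FFN_PF..._chunk_NNofMM` naming and may not match the files in this repository:
+ ```python
+ # Minimal sketch: load one compiled chunk on CPU+ANE, as chat.py's load_model() does.
+ import coremltools as ct
+
+ model = ct.models.CompiledMLModel(
+     "llama_FFN_PF_lut4_chunk_01of02.mlmodelc",  # hypothetical file name
+     ct.ComputeUnit.CPU_AND_NE,
+     function_name="prefill",  # chunked models expose 'prefill' and 'infer' functions
+ )
+ print(getattr(model, "user_defined_metadata", {}))  # e.g. com.anemll.context_length
+ ```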
84
+
85
+ **How to Run**
86
+
87
+ 1. Basic chat interface:
88
+ ```bash
89
+ python chat.py --meta ./meta.yaml
90
+ ```
91
+
92
+ 2. Full conversation mode with history:
93
+ ```bash
94
+ python chat_full.py --meta ./meta.yaml
95
+ ```
96
+
97
+ > Note: The first time the model loads, macOS will take some time to place it on the device.
98
+ > Subsequent loads will be instantaneous.
99
+ > Use Ctrl-D to exit, Ctrl-C to interrupt inference.
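+
+ **meta.yaml parameters**
+
+ Both scripts resolve model file names and sizes from `meta.yaml`. The keys below mirror what `chat.py` reads from `model_info.parameters`; the values shown are illustrative placeholders, and the `meta.yaml` shipped with this repository is authoritative:
+ ```yaml
+ model_info:
+   parameters:
+     model_prefix: llama      # prefix of the .mlmodelc file names
+     context_length: 512
+     batch_size: 64
+     num_chunks: 2            # placeholder; use the value from the shipped meta.yaml
+     lut_ffn: 4               # 'none' disables the _lutN suffix
+     lut_lmhead: 6
+     lut_embeddings: none
+ ```
+ From these values the scripts derive file names such as `llama_embeddings`, `llama_lm_head_lut6`, and `llama_FFN_PF_lut4_chunk_01of02`.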
100
+
101
+ **More Info**
102
+ Please check the following links for updates:
103
+
104
+ * [GitHub](https://github.com/anemll)
105
+ * [Hugging Face Models](https://huggingface.co/anemll)
106
+ * [Twitter/X](https://x.com/anemll)
107
+ * [Website](https://anemll.com)
108
+
109
+
110
111
+
112
+ # anemll-Llama-3.1-Nemotron-Nano-8B-v1-ctx512_0.3.0
113
+
114
+ This is a CoreML model converted using ANEMLL for Apple Neural Engine inference.
115
+
116
+ ## Available Distributions
117
+
118
+ ### Standard Distribution
119
+ - Contains zipped MLMODELC files
120
+ - Suitable for macOS and development
121
+
122
+ ### iOS Distribution
123
+ - Contains unzipped MLMODELC files
124
+ - Ready for iOS deployment
125
+ - Includes offline tokenizer support
126
+
127
+ ## Model Information
128
+ - Context Length: %CONTEXT_LENGTH%
129
+ - Batch Size: %BATCH_SIZE%
130
+ - Number of Chunks: %NUM_CHUNKS%
131
+
132
+ ## Quick Start
133
+
134
+ ### Test in iOS/macOS App
135
+ Try our sample Chat-Bot app on TestFlight:
136
+ 1. Install TestFlight from the App Store
137
+ 2. Join the beta test: [TestFlight Link](https://testflight.apple.com/join/jrQq1D1C)
138
+ 3. The app includes a small demo model pre-installed
139
+ 4. You can add custom models via Hugging Face URLs
140
+
141
+ > [!Note]
142
+ > - The TestFlight app works on both iOS and macOS
143
+ > - Demonstrates proper model integration and provides a reference implementation
144
+ > - iOS requires unzipped MLMODELC files and config.json for offline tokenizer
145
+ > - macOS supports both zipped and unzipped model formats
146
+
chat.py ADDED
@@ -0,0 +1,893 @@
1
+ #!/usr/bin/env python3
2
+ # chat.py
4
+ # Copyright (c) 2025 Anemll
5
+ # Licensed under the MIT License
6
+
7
+ import argparse
8
+ import os
9
+ import re
10
+ import glob
11
+ from pathlib import Path
12
+ import coremltools as ct
13
+ from transformers import LlamaTokenizer, AutoTokenizer
14
+ import torch
15
+ import torch.nn.functional as F
16
+ import numpy as np
17
+ import queue
18
+ import threading
19
+ import time
20
+ import yaml
21
+ import sys
22
+
23
+ # ANSI color codes
24
+ LIGHT_BLUE = "\033[94m"
25
+ DARK_BLUE = "\033[34m"
26
+ LIGHT_GREEN = "\033[92m"
27
+ RESET_COLOR = "\033[0m"
28
+
29
+ # Add at top with other constants
30
+ WARMUP_TOKEN_LIMIT = 10 # Maximum tokens to generate during warmup
31
+
32
+ class TokenPrinter:
33
+ """Handles background printing of generated tokens."""
34
+ def __init__(self, tokenizer):
35
+ self.tokenizer = tokenizer
36
+ self.token_queue = queue.Queue()
37
+ self.stop_event = threading.Event()
38
+ self.thread = None
39
+ self.buffer = ""
40
+ self.lock = threading.Lock()
41
+ self.thinking = True # Track if we're still in thinking mode
42
+ self.decoding_buffer = [] # Buffer for token IDs
43
+ # Add token counting and timing
44
+ self.start_time = time.time()
45
+ self.token_count = 0
46
+ self.start()
47
+
48
+ def start(self):
49
+ """Start the printer thread."""
50
+ if self.thread is None:
51
+ self.thread = threading.Thread(target=self._print_worker)
52
+ self.thread.daemon = True
53
+ self.thread.start()
54
+
55
+ def add_token(self, token_id):
56
+ """Add a token to the print queue."""
57
+ if not self.stop_event.is_set():
58
+ self.token_queue.put(token_id)
59
+ self.token_count += 1
60
+
61
+ def drain_buffer(self):
62
+ """Decode token IDs from decoding_buffer in the main thread."""
63
+ if not self.decoding_buffer:
64
+ return
65
+
66
+ # Decode all tokens at once in the main thread
67
+ token_str = self.tokenizer.decode(self.decoding_buffer)
68
+ self.decoding_buffer.clear()
69
+
70
+ # Store the text in buffer for later saving to file
71
+ with self.lock:
72
+ self.buffer += token_str
73
+
74
+ # Color-handling logic
75
+ if self.thinking and "</think>" in token_str:
76
+ self.thinking = False
77
+ parts = token_str.split("</think>")
78
+ if len(parts) > 0:
79
+ print(parts[0] + "</think>", end='', flush=True)
80
+ if len(parts) > 1:
81
+ print(LIGHT_BLUE + parts[1], end='', flush=True)
82
+ else:
83
+ if not self.thinking:
84
+ print(LIGHT_BLUE + token_str, end='', flush=True)
85
+ else:
86
+ print(token_str, end='', flush=True)
87
+
88
+ def _print_worker(self):
89
+ """Worker thread that takes token_ids from the queue."""
90
+ while not self.stop_event.is_set():
91
+ try:
92
+ token_id = self.token_queue.get(timeout=0.01)
93
+ with self.lock:
94
+ self.decoding_buffer.append(token_id)
95
+ self.token_queue.task_done()
96
+ except queue.Empty:
97
+ continue
98
+ except Exception as e:
99
+ print(f"\nError: Token printer error: {str(e)}")
100
+ break
101
+
102
+ def stop(self):
103
+ """Stop the printer thread."""
104
+ if self.thread and self.thread.is_alive():
105
+ # Ensure any remaining tokens are processed
106
+ self.drain_buffer()
107
+ self.stop_event.set()
108
+ try:
109
+ self.thread.join(timeout=1.0)
110
+ except Exception:
111
+ pass
112
+ # Calculate and print tokens/s with shorter format in blue
113
+ elapsed = time.time() - self.start_time
114
+ if elapsed > 0 and self.token_count > 0:
115
+ tokens_per_sec = self.token_count / elapsed
116
+ print(f"\n{DARK_BLUE}{tokens_per_sec:.1f} t/s{RESET_COLOR}")
117
+ else:
118
+ print(RESET_COLOR) # Reset color at the end
119
+ return self.buffer
120
+
121
+ def parse_model_path(path):
122
+ """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
123
+ path = Path(path)
124
+
125
+ # If path exists exactly as specified, return it
126
+ if path.exists():
127
+ return str(path)
128
+
129
+ # Try with both extensions
130
+ candidates = [
131
+ path, # Original path
132
+ path.with_suffix('.mlmodelc'), # With .mlmodelc
133
+ path.with_suffix('.mlpackage'), # With .mlpackage
134
+ Path(str(path) + '.mlmodelc'), # Handle case where extension is included
135
+ Path(str(path) + '.mlpackage')
136
+ ]
137
+
138
+ # Try all possible paths
139
+ for candidate in candidates:
140
+ if candidate.exists():
141
+ print(f"Found model at: {candidate}")
142
+ return str(candidate)
143
+
144
+ # If we get here, no valid path was found
145
+ print("\nError: Model not found. Tried following paths:")
146
+ for candidate in candidates:
147
+ print(f" {candidate}")
148
+ raise FileNotFoundError(f"Model not found: {path}")
149
+
150
+ def parse_ffn_filename(path):
151
+ """Parse FFN model filename to extract chunk information."""
152
+ path = Path(path)
153
+ pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
154
+ match = re.search(pattern, path.name)
155
+
156
+ if match:
157
+ current_chunk = int(match.group(1))
158
+ total_chunks = int(match.group(2))
159
+ return current_chunk, total_chunks
160
+ return None, None
161
+
162
+ def find_all_chunks(base_path):
163
+ """Find all chunk files matching the base FFN path pattern."""
164
+ path = Path(base_path)
165
+ pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
166
+ return sorted(glob.glob(pattern))
167
+
168
+ def load_model(path, function_name=None):
169
+ """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
170
+ path = Path(path)
171
+ compute_unit = ct.ComputeUnit.CPU_AND_NE
172
+
173
+ try:
174
+ if path.suffix == '.mlmodelc':
175
+ # For compiled models (.mlmodelc), use CompiledMLModel
176
+ if function_name:
177
+ return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
178
+ else:
179
+ return ct.models.CompiledMLModel(str(path), compute_unit)
180
+ else:
181
+ # For packages (.mlpackage)
182
+ if function_name:
183
+ return ct.models.MLModel(str(path), function_name=function_name)
184
+ else:
185
+ return ct.models.MLModel(str(path))
186
+
187
+ except RuntimeError as e:
188
+ if "valid manifest does not exist" in str(e):
189
+ print(f"\nError: Could not load compiled model at {path}")
190
+ print("This might be because:")
191
+ print("1. The model is not properly compiled")
192
+ print("2. The model was compiled for a different OS version")
193
+ print("3. The model needs to be recompiled")
194
+ print("\nTry using the .mlpackage version instead, or recompile the model.")
195
+ raise
196
+
197
+ def load_metadata(model,args):
198
+ # Extract metadata and config parameters
199
+ metadata = {}
200
+ if hasattr(model, 'user_defined_metadata'):
201
+ meta = model.user_defined_metadata
202
+
203
+ # Extract key parameters with defaults
204
+ metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
205
+ metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length'])) # Added state_length
206
+ metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
207
+ metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
208
+ metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))
209
+
210
+ print("\nExtracted Parameters:")
211
+ print(f" Context Length: {metadata['context_length']}")
212
+ print(f" State Length: {metadata['state_length']}")
213
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
214
+ print(f" LUT Bits: {metadata['lut_bits']}")
215
+ print(f" Number of Chunks: {metadata['num_chunks']}")
216
+
217
+ # Print model info
218
+ print("\nModel Info:")
219
+ if 'com.anemll.info' in meta:
220
+ print(f" {meta['com.anemll.info']}")
221
+ if 'com.github.apple.coremltools.version' in meta:
222
+ print(f" CoreML Tools: {meta['com.github.apple.coremltools.version']}")
223
+
224
+ # Print model input/output shapes
225
+ print("\nModel Shapes:")
226
+ if hasattr(model, 'input_description'):
227
+ print(" Inputs:")
228
+ for name, desc in model.input_description.items():
229
+ print(f" {name}: {desc}")
230
+ if hasattr(model, 'output_description'):
231
+ print(" Outputs:")
232
+ for name, desc in model.output_description.items():
233
+ print(f" {name}: {desc}")
234
+ else:
235
+ print("\nWarning: No metadata found in model")
236
+
237
+ # Check if model directory name contains context length pattern (ctxXXX)
238
+ ctx_len = 512
239
+ if args.context_length is None:
240
+ import re
241
+ ctx_match = re.search(r'ctx(\d+)', str(args.d))
242
+ if ctx_match:
243
+ ctx_len0 = int(ctx_match.group(1))
244
+ if 512 <= ctx_len0 <= 8096:
245
+ ctx_len = ctx_len0
246
+ print(f"\nDetected context length {ctx_len} from directory name")
247
+ else:
248
+ print(f"\nWarning: No context length found in directory name {args.d}; using default {ctx_len}")
249
+ else:
250
+ ctx_len = args.context_length
251
+
252
+ # Use defaults or values from args
253
+ metadata['context_length'] = ctx_len
254
+ metadata['state_length'] = ctx_len
255
+ # Get batch size from args or use default
256
+ metadata['batch_size'] = getattr(args, 'batch_size', 64)
257
+ metadata['lut_bits'] = 4
258
+ metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
259
+ print("\nUsing parameters:")
260
+ print(f" Context Length: {metadata['context_length']}")
261
+ print(f" State Length: {metadata['state_length']}")
262
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
263
+ print(f" LUT Bits: {metadata['lut_bits']}")
264
+ print(f" Number of Chunks: {metadata['num_chunks']}")
265
+
266
+ # Override with values from args if they exist
267
+ if hasattr(args, 'batch_size') and args.batch_size is not None:
268
+ metadata['batch_size'] = args.batch_size
269
+ print(f"\nOverriding batch size from args: {args.batch_size}")
270
+ if hasattr(args, 'num_chunks') and args.num_chunks is not None:
271
+ metadata['num_chunks'] = args.num_chunks
272
+ print(f"\nOverriding num chunks from args: {args.num_chunks}")
273
+
274
+ return metadata
275
+
276
+ def load_models(args,metadata):
277
+ """Load all required models and extract metadata."""
278
+ print("\nLoading models...")
279
+
280
+ try:
281
+ # Load embeddings model
282
+ print("\nLoading embeddings model...")
283
+ embed_path = parse_model_path(args.embed)
284
+ print(f"Loading from: {embed_path}")
285
+ embed_model = load_model(embed_path)
286
+ print("Embeddings model loaded successfully")
287
+ metadata = load_metadata(embed_model,args)
288
+
289
+
290
+
291
+ # Load LM head model
292
+ print("\nLoading LM head model...")
293
+ lmhead_path = parse_model_path(args.lmhead)
294
+ print(f"Loading from: {lmhead_path}")
295
+ lmhead_model = load_model(lmhead_path)
296
+ print("LM head model loaded successfully")
297
+
298
+ # Parse FFN path and find chunks if needed
299
+ print("\nLoading FFN+PREFILL model(s)...")
300
+ ffn_path = parse_model_path(args.ffn)
301
+ chunk_no, total_chunks = parse_ffn_filename(ffn_path)
302
+
303
+ ffn_models = []
304
+ if chunk_no and total_chunks:
305
+ print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
306
+ # Find and load all chunks
307
+ chunk_paths = find_all_chunks(ffn_path)
308
+ if len(chunk_paths) != total_chunks:
309
+ raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")
310
+
311
+ for chunk_path in chunk_paths:
312
+ print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
313
+ try:
314
+ # For chunked models, we need both infer and prefill functions
315
+ ffn_models.append({
316
+ 'infer': load_model(chunk_path, function_name='infer'),
317
+ 'prefill': load_model(chunk_path, function_name='prefill')
318
+ })
319
+ print("Chunk loaded successfully")
320
+ except Exception as e:
321
+ print(f"Error loading chunk {chunk_path}: {str(e)}")
322
+ raise
323
+ metadata = load_metadata(ffn_models[0],args)
324
+
325
+ else:
326
+ print("\nLoading single FFN model...")
327
+ ffn_models.append(load_model(ffn_path))
328
+ print("FFN model loaded successfully")
329
+
330
+ return embed_model, ffn_models, lmhead_model, metadata
331
+
332
+ except Exception as e:
333
+ print(f"\nError loading models: {str(e)}")
334
+ print("\nPlease ensure all model files exist and are accessible.")
335
+ print("Expected files:")
336
+ print(f" Embeddings: {args.embed}")
337
+ print(f" LM Head: {args.lmhead}")
338
+ print(f" FFN: {args.ffn}")
339
+ raise
340
+
341
+ # At the top of the file, make this a default path
342
+
343
+ def initialize_tokenizer(model_path=None):
344
+ """Initialize and configure the tokenizer."""
345
+ try:
346
+
347
+
348
+ tokenizer = AutoTokenizer.from_pretrained(
349
+ str(model_path),
350
+ use_fast=False,
351
+ trust_remote_code=True
352
+ )
353
+
354
+ print("\nTokenizer Configuration:")
355
+ print(f"Tokenizer type: {type(tokenizer)}")
356
+ print(f"Tokenizer name: {tokenizer.__class__.__name__}")
357
+ print(f"Vocabulary size: {len(tokenizer)}")
358
+ print(f"Model max length: {tokenizer.model_max_length}")
359
+
360
+ if tokenizer.pad_token is None:
361
+ tokenizer.pad_token = tokenizer.eos_token
362
+ tokenizer.pad_token_id = tokenizer.eos_token_id
363
+ print("Set PAD token to EOS token")
364
+
365
+ tokenizer.padding_side = "left"
366
+
367
+ print(f"\nSpecial Tokens:")
368
+ print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
369
+ print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
370
+ print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
371
+ print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")
372
+
373
+ return tokenizer
374
+
375
+ except Exception as e:
376
+ print(f"\nError: Failed to load tokenizer from {model_path}")
377
+ print(f"Error details: {str(e)}")
378
+ print(f"Error type: {type(e)}")
379
+ print("\nThis code requires a Llama 3.2 model for chat template functionality.")
380
+ print("Please provide the path to a Llama 3.2 model directory.")
381
+ import traceback
382
+ traceback.print_exc()
383
+ raise
384
+
385
+
386
+
387
+ def make_causal_mask(length, start):
388
+ """Create causal attention mask."""
389
+ mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
390
+ row_indices = np.arange(length).reshape(length, 1)
391
+ col_indices = np.arange(length).reshape(1, length)
392
+ mask[:, :, col_indices <= (row_indices + start)] = 0
393
+ return mask
394
+
395
+ def initialize_causal_mask(context_length):
396
+ """Initialize causal mask for transformer attention."""
397
+ causal_mask = make_causal_mask(context_length, 0)
398
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
399
+ print(f"\nInitialized causal mask for context length {context_length}")
400
+ return causal_mask
401
+
402
+ def run_prefill(embed_model, ffn_models, input_ids, context_pos, context_length, batch_size=64, state=None, causal_mask=None):
403
+ """Run prefill on the input sequence."""
404
+ # Use provided causal mask or create one if not provided
405
+ if causal_mask is None:
406
+ causal_mask = make_causal_mask(context_length, 0)
407
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
408
+
409
+ # Process in batches
410
+ batch_pos = 0
411
+ while batch_pos < context_pos:
412
+ batch_end = min(batch_pos + batch_size, context_pos)
413
+ current_batch_size = batch_end - batch_pos
414
+
415
+ # Get current batch
416
+ batch_input = input_ids[:, batch_pos:batch_end]
417
+
418
+ # Always pad to full batch size for prefill
419
+ batch_input = F.pad(
420
+ batch_input,
421
+ (0, batch_size - current_batch_size),
422
+ value=0
423
+ )
424
+
425
+ # Generate position IDs for full batch size
426
+ position_ids = torch.arange(batch_size, dtype=torch.int32) # Changed: Always use full batch size
427
+ batch_causal_mask = causal_mask[:, :, :batch_size, :] # Changed: Use full batch size
428
+
429
+ # Run embeddings with proper batch size
430
+ hidden_states = torch.from_numpy(
431
+ embed_model.predict({
432
+ 'input_ids': batch_input.numpy(),
433
+ 'batch_size': np.array([batch_size], dtype=np.int32) # Add batch_size parameter
434
+ })['hidden_states']
435
+ )
436
+
437
+ # Run through FFN chunks with state
438
+ for ffn_model in ffn_models:
439
+ if isinstance(ffn_model, dict):
440
+ inputs = {
441
+ 'hidden_states': hidden_states.numpy(), # [1, 64, hidden_size]
442
+ 'position_ids': position_ids.numpy(), # [64]
443
+ 'causal_mask': batch_causal_mask.numpy(), # [1, 1, 64, context_length]
444
+ 'current_pos': np.array([batch_pos], dtype=np.int32) # [1]
445
+ }
446
+ output = ffn_model['prefill'].predict(inputs, state)
447
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
448
+
449
+ batch_pos = batch_end
450
+
451
+ return torch.tensor([context_pos], dtype=torch.int32)
452
+
453
+ def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, state=None, causal_mask=None, temperature=0.0):
454
+ """Generate the next token."""
455
+ # Get current token
456
+ current_token = input_ids[:, pos-1:pos] # [1, 1]
457
+
458
+ # Run embeddings
459
+ hidden_states = torch.from_numpy(
460
+ embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
461
+ ) # [1, 1, hidden_size]
462
+
463
+ # Create masks
464
+ update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
465
+ update_mask[0, 0, pos-1, 0] = 1.0
466
+ position_ids = torch.tensor([pos-1], dtype=torch.int32) # [1]
467
+
468
+ # Use provided causal mask or create one if not provided
469
+ if causal_mask is None:
470
+ causal_mask_data = make_causal_mask(context_length, 0)
471
+ single_causal_mask = torch.tensor(causal_mask_data[:, :, pos-1:pos, :], dtype=torch.float16) # [1, 1, 1, context_length]
472
+ else:
473
+ single_causal_mask = causal_mask[:, :, pos-1:pos, :]
474
+
475
+ # Run through FFN chunks with state
476
+ for ffn_model in ffn_models:
477
+ if isinstance(ffn_model, dict):
478
+ inputs = {
479
+ 'hidden_states': hidden_states.numpy(),
480
+ 'update_mask': update_mask.numpy(),
481
+ 'position_ids': position_ids.numpy(),
482
+ 'causal_mask': single_causal_mask.numpy(),
483
+ 'current_pos': position_ids.numpy()
484
+ }
485
+ output = ffn_model['infer'].predict(inputs, state)
486
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
487
+
488
+ # Run LM head
489
+ lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})
490
+ # Debug print
491
+ #print("\nLM Head output keys:", list(lm_output.keys()))
492
+
493
+ # Combine logits1-8 if they exist
494
+ if 'logits1' in lm_output:
495
+ # Concatenate all logits parts
496
+ logits_parts = []
497
+ for i in range(1, 9):
498
+ key = f'logits{i}'
499
+ if key in lm_output:
500
+ logits_parts.append(torch.from_numpy(lm_output[key]))
501
+ logits = torch.cat(logits_parts, dim=-1) # Concatenate along vocab dimension
502
+ else:
503
+ # Try output_logits as fallback
504
+ logits = torch.from_numpy(lm_output['output_logits'])
505
+
506
+ # Apply temperature and sample
507
+ if temperature > 0:
508
+ logits = logits / temperature
509
+ probs = F.softmax(logits[0, -1, :], dim=-1)
510
+ next_token = torch.multinomial(probs, num_samples=1).item()
511
+ else:
512
+ next_token = torch.argmax(logits[0, -1, :]).item()
513
+
514
+ return next_token
515
+
516
+ def create_unified_state(ffn_models, context_length):
517
+ """Create unified KV cache state for transformer."""
518
+ if isinstance(ffn_models[0], dict):
519
+ # Use first FFN model's prefill function to create state
520
+ state = ffn_models[0]['prefill'].make_state()
521
+ print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
522
+ return state
523
+ else:
524
+ state = ffn_models[0].make_state()
525
+ print("\nCreated unified transformer state")
526
+ return state
527
+
528
+ def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, causal_mask=None, auto_prompt=None, warmup=False, save_file=None):
529
+ """Interactive chat loop."""
530
+ context_length = metadata.get('context_length')
531
+ batch_size = metadata.get('batch_size', 64)
532
+
533
+ if not warmup:
534
+ print(f"\nUsing context length: {context_length}")
535
+ print("\nStarting chat session. Press Ctrl+D to exit.")
536
+ print("Type your message and press Enter to chat.")
537
+
538
+ # Check if tokenizer has chat template and if it works
539
+ has_chat_template = False
540
+ try:
541
+ # Test if chat template works
542
+ test_messages = [{"role": "user", "content": "test"}]
543
+ tokenizer.apply_chat_template(test_messages, return_tensors="pt")
544
+ has_chat_template = True
545
+ if not warmup:
546
+ print("\nUsing chat template for prompts")
547
+ except:
548
+ if not warmup:
549
+ print("\nUsing manual formatting for prompts")
550
+
551
+ conversation = []
552
+
553
+ try:
554
+ while True:
555
+ try:
556
+ if not warmup:
557
+ print(f"\n{LIGHT_GREEN}You:{RESET_COLOR}", end=' ', flush=True)
558
+ if auto_prompt is not None:
559
+ user_input = auto_prompt
560
+ if not warmup:
561
+ print(user_input)
562
+ else:
563
+ user_input = input().strip()
564
+ except EOFError:
565
+ if not warmup:
566
+ print("\nExiting chat...")
567
+ break
568
+
569
+ if not user_input:
570
+ continue
571
+
572
+ # Format prompt based on tokenizer capabilities
573
+ if has_chat_template:
574
+ messages = [{"role": "user", "content": user_input}]
575
+ input_ids = tokenizer.apply_chat_template(
576
+ messages,
577
+ return_tensors="pt",
578
+ add_generation_prompt=True
579
+ ).to(torch.int32)
580
+ else:
581
+ # Manual formatting for Llama models without chat template
582
+ formatted_prompt = f"[INST] {user_input} [/INST]"
583
+ input_ids = tokenizer(
584
+ formatted_prompt,
585
+ return_tensors="pt",
586
+ add_special_tokens=True
587
+ ).input_ids.to(torch.int32)
588
+
589
+ context_pos = input_ids.size(1)
590
+
591
+ if not warmup:
592
+ print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)
593
+
594
+ # Initialize token printer
595
+ token_printer = TokenPrinter(tokenizer)
596
+ tokens_generated = 0 # Track number of tokens
597
+
598
+ try:
599
+ # Start prefill timing
600
+ prefill_start = time.time()
601
+
602
+ # Run prefill with state and causal mask
603
+ current_pos = run_prefill(
604
+ embed_model,
605
+ ffn_models,
606
+ input_ids,
607
+ context_pos,
608
+ context_length,
609
+ batch_size,
610
+ state,
611
+ causal_mask
612
+ )
613
+
614
+ # Calculate prefill timing
615
+ prefill_time = time.time() - prefill_start
616
+ prefill_tokens = context_pos # Number of tokens in input
617
+ prefill_tokens_per_sec = prefill_tokens / prefill_time if prefill_time > 0 else 0
618
+
619
+ # Generation loop with state
620
+ input_ids = input_ids
621
+ pos = context_pos
622
+ inference_start = time.time()
623
+ inference_tokens = 0
624
+
625
+ while pos < context_length - 1:
626
+ # Generate next token with causal mask
627
+ next_token = generate_next_token(
628
+ embed_model,
629
+ ffn_models,
630
+ lmhead_model,
631
+ input_ids,
632
+ pos,
633
+ context_length,
634
+ state,
635
+ causal_mask
636
+ )
637
+
638
+ # Add token to sequence
639
+ if pos < input_ids.size(1):
640
+ input_ids[0, pos] = next_token
641
+ else:
642
+ input_ids = torch.cat([
643
+ input_ids,
644
+ torch.tensor([[next_token]], dtype=torch.int32)
645
+ ], dim=1)
646
+
647
+ # Add to printer only if not in warmup
648
+ if not warmup:
649
+ token_printer.add_token(next_token)
650
+ token_printer.drain_buffer()
651
+
652
+ pos += 1
653
+ tokens_generated += 1
654
+ inference_tokens += 1
655
+
656
+ # Check limits
657
+ if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
658
+ break
659
+
660
+ if next_token == tokenizer.eos_token_id:
661
+ break
662
+
663
+ # Calculate inference timing
664
+ inference_time = time.time() - inference_start
665
+ inference_tokens_per_sec = inference_tokens / inference_time if inference_time > 0 else 0
666
+
667
+ # Get final response and add to conversation
668
+ if not warmup:
669
+ response = token_printer.stop()
670
+ # Print timing stats
671
+ prefill_ms = prefill_time * 1000 # Convert to milliseconds
672
+ print(f"\nPrefill: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s)")
673
+ print(f"Inference: {inference_tokens_per_sec:.1f} t/s")
674
+ print(f"Total: Generated {tokens_generated} tokens in {prefill_time + inference_time:.2f}s")
675
+ conversation.append({"role": "assistant", "content": response})
676
+
677
+ # Save response to file if requested
678
+ if save_file:
679
+ try:
680
+ # Add small delay to ensure all tokens are processed
681
+ time.sleep(0.5)
682
+
683
+ # Make sure response ends with EOS token if it's supposed to
684
+ if response and not response.endswith("<|eot_id|>") and not response.endswith("</s>"):
685
+ if tokenizer.eos_token:
686
+ eos_text = tokenizer.decode([tokenizer.eos_token_id])
687
+ if not response.endswith(eos_text):
688
+ print(f"\n{DARK_BLUE}Adding missing EOS token for consistency{RESET_COLOR}")
689
+ response += eos_text
690
+
691
+ with open(save_file, 'w') as f:
692
+ f.write(response)
693
+ print(f"\n{DARK_BLUE}Response saved to file: {save_file}{RESET_COLOR}")
694
+ except Exception as e:
695
+ print(f"\n{DARK_BLUE}Error saving to file: {str(e)}{RESET_COLOR}")
696
+ else:
697
+ token_printer.stop() # Clean up without printing stats
698
+
699
+ # Exit after one response in auto_prompt mode
700
+ if auto_prompt is not None:
701
+ break
702
+
703
+ except KeyboardInterrupt:
704
+ print("\nGeneration interrupted")
705
+ token_printer.stop()
706
+ continue
707
+
708
+ except Exception as e:
709
+ print(f"\nError in chat loop: {str(e)}")
710
+ import traceback
711
+ traceback.print_exc()
712
+
713
+ def parse_args():
714
+ parser = argparse.ArgumentParser(description='Chat with CoreML LLaMA, gil resolved (c) 2025 Anemll')
715
+
716
+ # Add meta.yaml option
717
+ parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')
718
+
719
+ # Model paths
720
+ parser.add_argument('--d', '--dir', type=str, default='.',
721
+ help='Directory containing model files (default: current directory)')
722
+ parser.add_argument('--embed', type=str, required=False,
723
+ help='Path to embeddings model (relative to --dir)')
724
+ parser.add_argument('--ffn', type=str, required=False,
725
+ help='Path to FFN model (can be chunked, relative to --dir)')
726
+ parser.add_argument('--lmhead', type=str, required=False,
727
+ help='Path to LM head model (relative to --dir)')
728
+ parser.add_argument('--tokenizer', type=str, required=False,
729
+ help='Path to tokenizer')
730
+
731
+ # Add new argument for auto-generation
732
+ parser.add_argument('--prompt', type=str,
733
+ help='If specified, run once with this prompt and exit')
734
+
735
+ # Add save option
736
+ parser.add_argument('--save', type=str,
737
+ help='Save assistant\'s response to specified file')
738
+
739
+ # Add no-warmup flag
740
+ parser.add_argument('--nw', action='store_true',
741
+ help='Skip warmup phase')
742
+
743
+ # Model configuration
744
+ parser.add_argument('--context-length', type=int,
745
+ help='Context length for the model (default: 512), if not provided, it will be detected from the model directory name ctxNUMBER')
746
+ parser.add_argument('--batch-size', type=int,
747
+ help='Batch size for prefill (default: 64)')
748
+
749
+ args = parser.parse_args()
750
+
751
+ # If meta.yaml is provided, load parameters from it
752
+ if args.meta:
753
+ try:
754
+ with open(args.meta, 'r') as f:
755
+ meta = yaml.safe_load(f)
756
+ params = meta['model_info']['parameters']
757
+
758
+ # Set model directory to meta.yaml directory if not specified
759
+ if not args.d or args.d == '.':
760
+ args.d = str(Path(args.meta).parent)
761
+
762
+ # Build model paths based on parameters
763
+ prefix = params.get('model_prefix', 'llama') # Default to 'llama' if not specified
764
+ lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
765
+ lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
766
+ lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
767
+ num_chunks = int(params['num_chunks'])
768
+
769
+ # Set model paths if not specified
770
+ if not args.lmhead:
771
+ args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
772
+ if not args.embed:
773
+ args.embed = f'{prefix}_embeddings{lut_embeddings}' # Changed from lm_head to embeddings
774
+ if not args.ffn:
775
+ args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
776
+ if not args.tokenizer:
777
+ args.tokenizer = args.d
778
+
779
+ # Set other parameters if not overridden by command line
780
+ if args.context_length is None:
781
+ args.context_length = int(params['context_length'])
782
+ if args.batch_size is None:
783
+ args.batch_size = int(params['batch_size'])
784
+ args.num_chunks = num_chunks
785
+
786
+ print(f"\nLoaded parameters from {args.meta}:")
787
+ print(f" Context Length: {args.context_length}")
788
+ print(f" Batch Size: {args.batch_size}")
789
+ print(f" Num Chunks: {args.num_chunks}")
790
+ print(f" Models Directory: {args.d}")
791
+ print(f" Embeddings: {args.embed}")
792
+ print(f" LM Head: {args.lmhead}")
793
+ print(f" FFN: {args.ffn}")
794
+
795
+ except Exception as e:
796
+ print(f"\nError loading meta.yaml: {str(e)}")
797
+ sys.exit(1)
798
+
799
+ return args
800
+
801
+ def main():
802
+ args = parse_args()
803
+
804
+ # Convert directory to absolute path
805
+ model_dir = Path(args.d).resolve()
806
+ if not model_dir.exists():
807
+ print(f"\nError: Model directory not found: {model_dir}")
808
+ return 1
809
+
810
+ print(f"\nUsing model directory: {model_dir}")
811
+ print(f"Context length: {args.context_length}")
812
+
813
+ try:
814
+ # Update paths to be relative to model directory
815
+ args.embed = str(model_dir / args.embed)
816
+ args.ffn = str(model_dir / args.ffn)
817
+ args.lmhead = str(model_dir / args.lmhead)
818
+
819
+ # Handle tokenizer path separately since it's not relative to model_dir
820
+ if args.tokenizer is None:
821
+ args.tokenizer = str(model_dir)
822
+
823
+ if not Path(args.tokenizer).exists():
824
+ print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
825
+ return 1
826
+
827
+ args.tokenizer = str(Path(args.tokenizer).resolve()) # Convert to absolute path
828
+ print(f"Using tokenizer path: {args.tokenizer}")
829
+
830
+ metadata = {}
831
+ # Load models and extract metadata
832
+ embed_model, ffn_models, lmhead_model, metadata = load_models(args,metadata)
833
+
834
+ print(f"\nMetadata before applying args.context_length: {metadata}")
835
+
836
+ # Override context length from command line if provided
837
+ if args.context_length is not None:
838
+ metadata['context_length'] = args.context_length
839
+ metadata['state_length'] = args.context_length # Also update state_length
840
+ print(f"\nOverriding context length from command line: {args.context_length}")
841
+
842
+ print(f"\nMetadata after load_models: {metadata}")
843
+
844
+ # Load tokenizer with resolved path
845
+ tokenizer = initialize_tokenizer(args.tokenizer)
846
+ if tokenizer is None:
847
+ raise RuntimeError("Failed to initialize tokenizer")
848
+
849
+ # Create unified state once
850
+ state = create_unified_state(ffn_models, metadata['context_length'])
851
+
852
+ # Initialize causal mask once
853
+ causal_mask = initialize_causal_mask(metadata['context_length'])
854
+
855
+ # Warmup runs to prevent Python GIL issues with CoreML !
856
+ if not args.nw:
857
+ for i in range(2):
858
+ chat_loop(
859
+ embed_model=embed_model,
860
+ ffn_models=ffn_models,
861
+ lmhead_model=lmhead_model,
862
+ tokenizer=tokenizer,
863
+ metadata=metadata,
864
+ state=state,
865
+ causal_mask=causal_mask, # Pass the causal mask
866
+ warmup=True,
867
+ auto_prompt="who are you?"
868
+ )
869
+
870
+ # Main run
871
+ chat_loop(
872
+ embed_model=embed_model,
873
+ ffn_models=ffn_models,
874
+ lmhead_model=lmhead_model,
875
+ tokenizer=tokenizer,
876
+ metadata=metadata,
877
+ state=state,
878
+ causal_mask=causal_mask, # Pass the causal mask
879
+ warmup=False,
880
+ auto_prompt=args.prompt,
881
+ save_file=args.save
882
+ )
883
+
884
+ except Exception as e:
885
+ print(f"\nError: {str(e)}")
886
+ import traceback
887
+ traceback.print_exc()
888
+ return 1
889
+
890
+ return 0
891
+
892
+ if __name__ == "__main__":
893
+ exit(main())
chat_full.py ADDED
@@ -0,0 +1,976 @@
1
+ #!/usr/bin/env python3
2
+ # chat_full.py
4
+ # Copyright (c) 2025 Anemll
5
+ # Licensed under the MIT License
6
+
7
+ import argparse
8
+ import os
9
+ import re
10
+ import glob
11
+ from pathlib import Path
12
+ import coremltools as ct
13
+ from transformers import LlamaTokenizer, AutoTokenizer
14
+ import torch
15
+ import torch.nn.functional as F
16
+ import numpy as np
17
+ import queue
18
+ import threading
19
+ import time
20
+ import yaml
21
+ import sys
22
+
23
+ # ANSI color codes
24
+ LIGHT_BLUE = "\033[94m"
25
+ DARK_BLUE = "\033[34m"
26
+ LIGHT_GREEN = "\033[92m"
27
+ RESET_COLOR = "\033[0m"
28
+
29
+ # Add at the top with other constants
30
+ WARMUP_TOKEN_LIMIT = 10 # Maximum tokens to generate during warmup
31
+ THINKING_MODE = False
32
+ THINKING_PROMPT = """You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem."""
33
+ DEBUG_LEVEL = 0 # Default debug level
34
+
35
+ class TokenPrinter:
36
+ """Handles background printing of generated tokens."""
37
+ def __init__(self, tokenizer):
38
+ self.tokenizer = tokenizer
39
+ self.token_queue = queue.Queue()
40
+ self.stop_event = threading.Event()
41
+ self.thread = None
42
+ self.buffer = ""
43
+ self.lock = threading.Lock()
44
+ self.thinking = True # Track if we're still in thinking mode
45
+ self.decoding_buffer = [] # Buffer for token IDs
46
+ # Timing and stats tracking
47
+ self.start_time = time.time()
48
+ self.token_count = 0
49
+ self.prefill_time = 0
50
+ self.inference_time = 0
51
+ self.context_pos = 0
52
+ self.start()
53
+
54
+ def start(self):
55
+ """Start the printer thread."""
56
+ if self.thread is None:
57
+ self.thread = threading.Thread(target=self._print_worker)
58
+ self.thread.daemon = True
59
+ self.thread.start()
60
+
61
+ def add_token(self, token_id):
62
+ """Add a token to the print queue."""
63
+ if not self.stop_event.is_set():
64
+ self.token_queue.put(token_id)
65
+ self.token_count += 1
66
+
67
+ def drain_buffer(self):
68
+ """Decode token IDs from decoding_buffer in the main thread."""
69
+ if not self.decoding_buffer:
70
+ return
71
+
72
+ # Decode all tokens at once in the main thread
73
+ token_str = self.tokenizer.decode(self.decoding_buffer)
74
+ self.decoding_buffer.clear()
75
+
76
+ # Color-handling logic
77
+ if self.thinking and "</think>" in token_str:
78
+ self.thinking = False
79
+ parts = token_str.split("</think>")
80
+ if len(parts) > 0:
81
+ print(parts[0] + "</think>", end='', flush=True)
82
+ if len(parts) > 1:
83
+ print(LIGHT_BLUE + parts[1], end='', flush=True)
84
+ else:
85
+ if not self.thinking:
86
+ print(LIGHT_BLUE + token_str, end='', flush=True)
87
+ else:
88
+ print(token_str, end='', flush=True)
89
+
90
+ def _print_worker(self):
91
+ """Worker thread that takes token_ids from the queue."""
92
+ while not self.stop_event.is_set():
93
+ try:
94
+ token_id = self.token_queue.get(timeout=0.01)
95
+ with self.lock:
96
+ self.decoding_buffer.append(token_id)
97
+ self.token_queue.task_done()
98
+ except queue.Empty:
99
+ continue
100
+ except Exception as e:
101
+ print(f"\nError: Token printer error: {str(e)}")
102
+ break
103
+
104
+ def stop(self):
105
+ """Stop the printer thread."""
106
+ if self.thread and self.thread.is_alive():
107
+ self.stop_event.set()
108
+ try:
109
+ self.thread.join(timeout=1.0)
110
+ except Exception:
111
+ pass
112
+ print(RESET_COLOR) # Reset color at the end
113
+ return self.buffer
114
+
115
+ def set_timing(self, prefill_time, inference_time, context_pos):
116
+ """Set timing information."""
117
+ self.prefill_time = prefill_time
118
+ self.inference_time = inference_time
119
+ self.context_pos = context_pos
120
+
121
+ def parse_model_path(path):
122
+ """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
123
+ path = Path(path)
124
+
125
+ # If path exists exactly as specified, return it
126
+ if path.exists():
127
+ return str(path)
128
+
129
+ # Try with both extensions
130
+ candidates = [
131
+ path, # Original path
132
+ path.with_suffix('.mlmodelc'), # With .mlmodelc
133
+ path.with_suffix('.mlpackage'), # With .mlpackage
134
+ Path(str(path) + '.mlmodelc'), # Handle case where extension is included
135
+ Path(str(path) + '.mlpackage')
136
+ ]
137
+
138
+ # Try all possible paths
139
+ for candidate in candidates:
140
+ if candidate.exists():
141
+ print(f"Found model at: {candidate}")
142
+ return str(candidate)
143
+
144
+ # If we get here, no valid path was found
145
+ print("\nError: Model not found. Tried following paths:")
146
+ for candidate in candidates:
147
+ print(f" {candidate}")
148
+ raise FileNotFoundError(f"Model not found: {path}")
149
+
150
+ def parse_ffn_filename(path):
151
+ """Parse FFN model filename to extract chunk information."""
152
+ path = Path(path)
153
+ pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
154
+ match = re.search(pattern, path.name)
155
+
156
+ if match:
157
+ current_chunk = int(match.group(1))
158
+ total_chunks = int(match.group(2))
159
+ return current_chunk, total_chunks
160
+ return None, None
161
+
162
+ def find_all_chunks(base_path):
163
+ """Find all chunk files matching the base FFN path pattern."""
164
+ path = Path(base_path)
165
+ pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
166
+ return sorted(glob.glob(pattern))
167
+
168
+ def load_model(path, function_name=None):
169
+ """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
170
+ path = Path(path)
171
+ compute_unit = ct.ComputeUnit.CPU_AND_NE
172
+
173
+ try:
174
+ if path.suffix == '.mlmodelc':
175
+ # For compiled models (.mlmodelc), use CompiledMLModel
176
+ if function_name:
177
+ return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
178
+ else:
179
+ return ct.models.CompiledMLModel(str(path), compute_unit)
180
+ else:
181
+ # For packages (.mlpackage)
182
+ if function_name:
183
+ return ct.models.MLModel(str(path), function_name=function_name)
184
+ else:
185
+ return ct.models.MLModel(str(path))
186
+
187
+ except RuntimeError as e:
188
+ if "valid manifest does not exist" in str(e):
189
+ print(f"\nError: Could not load compiled model at {path}")
190
+ print("This might be because:")
191
+ print("1. The model is not properly compiled")
192
+ print("2. The model was compiled for a different OS version")
193
+ print("3. The model needs to be recompiled")
194
+ print("\nTry using the .mlpackage version instead, or recompile the model.")
195
+ raise
196
+
197
+ def parse_args():
198
+ parser = argparse.ArgumentParser(description='Full Chat with CoreML LLaMA with context window shifting, gil resolved (c) 2025 Anemll')
199
+
200
+ # Add meta.yaml option
201
+ parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')
202
+
203
+ # Add existing arguments
204
+ parser.add_argument('--d', '--dir', type=str, default='.',
205
+ help='Directory containing model files (default: current directory)')
206
+ parser.add_argument('--embed', type=str, required=False,
207
+ help='Path to embeddings model (relative to --dir)')
208
+ parser.add_argument('--ffn', type=str, required=False,
209
+ help='Path to FFN model (can be chunked, relative to --dir)')
210
+ parser.add_argument('--lmhead', type=str, required=False,
211
+ help='Path to LM head model (relative to --dir)')
212
+ parser.add_argument('--tokenizer', type=str, required=False,
213
+ help='Path to tokenizer')
214
+
215
+ # Add new argument for auto-generation
216
+ parser.add_argument('--prompt', type=str,
217
+ help='If specified, run once with this prompt and exit')
218
+
219
+ # Add no-warmup flag
220
+ parser.add_argument('--nw', action='store_true',
221
+ help='Skip warmup phase')
222
+
223
+ # Add debug level
224
+ parser.add_argument('--debug-level', type=int, default=0,
225
+ help='Debug level (0=none, 1=print prompts, 2=more verbose)')
226
+
227
+ # Model configuration
228
+ parser.add_argument('--context-length', type=int,
229
+ help='Context length for the model (default: 512), if not provided, it will be detected from the model directory name ctxNUMBER')
230
+ parser.add_argument('--batch-size', type=int,
231
+ help='Batch size for prefill (default: 64)')
232
+
233
+ args = parser.parse_args()
234
+
235
+ # If meta.yaml is provided, load parameters from it
236
+ if args.meta:
237
+ try:
238
+ with open(args.meta, 'r') as f:
239
+ meta = yaml.safe_load(f)
240
+ params = meta['model_info']['parameters']
241
+
242
+ # Set model directory to meta.yaml directory if not specified
243
+ if not args.d or args.d == '.':
244
+ args.d = str(Path(args.meta).parent)
245
+
246
+ # Build model paths based on parameters
247
+ prefix = params.get('model_prefix', 'llama') # Default to 'llama' if not specified
248
+ lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
249
+ lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
250
+ lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
251
+ num_chunks = int(params['num_chunks'])
252
+
253
+ # Set model paths if not specified
254
+ if not args.lmhead:
255
+ args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
256
+ if not args.embed:
257
+ args.embed = f'{prefix}_embeddings{lut_embeddings}' # Changed from lm_head to embeddings
258
+ if not args.ffn:
259
+ args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
260
+ if not args.tokenizer:
261
+ args.tokenizer = args.d
262
+
263
+ # Set other parameters if not overridden by command line
264
+ if args.context_length is None:
265
+ args.context_length = int(params['context_length'])
266
+ if args.batch_size is None:
267
+ args.batch_size = int(params['batch_size'])
268
+ args.num_chunks = num_chunks
269
+
270
+ print(f"\nLoaded parameters from {args.meta}:")
271
+ print(f" Context Length: {args.context_length}")
272
+ print(f" Batch Size: {args.batch_size}")
273
+ print(f" Num Chunks: {args.num_chunks}")
274
+ print(f" Models Directory: {args.d}")
275
+ print(f" Embeddings: {args.embed}")
276
+ print(f" LM Head: {args.lmhead}")
277
+ print(f" FFN: {args.ffn}")
278
+
279
+ except Exception as e:
280
+ print(f"\nError loading meta.yaml: {str(e)}")
281
+ sys.exit(1)
282
+
283
+ return args
284
+
285
+ def load_metadata(model,args):
286
+ # Extract metadata and config parameters
287
+ metadata = {}
288
+ if hasattr(model, 'user_defined_metadata'):
289
+ meta = model.user_defined_metadata
290
+
291
+ # Extract key parameters with defaults
292
+ metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
293
+ metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length'])) # Added state_length
294
+ metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
295
+ metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
296
+ metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))
297
+
298
+ print("\nExtracted Parameters:")
299
+ print(f" Context Length: {metadata['context_length']}")
300
+ print(f" State Length: {metadata['state_length']}")
301
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
302
+ print(f" LUT Bits: {metadata['lut_bits']}")
303
+ print(f" Number of Chunks: {metadata['num_chunks']}")
304
+
305
+ # Print model info
306
+ print("\nModel Info:")
307
+ if 'com.anemll.info' in meta:
308
+ print(f" {meta['com.anemll.info']}")
309
+ if 'com.github.apple.coremltools.version' in meta:
310
+ print(f" CoreML Tools: {meta['com.github.apple.coremltools.version']}")
311
+
312
+ # Print model input/output shapes
313
+ print("\nModel Shapes:")
314
+ if hasattr(model, 'input_description'):
315
+ print(" Inputs:")
316
+ for name, desc in model.input_description.items():
317
+ print(f" {name}: {desc}")
318
+ if hasattr(model, 'output_description'):
319
+ print(" Outputs:")
320
+ for name, desc in model.output_description.items():
321
+ print(f" {name}: {desc}")
322
+ else:
323
+ print("\nWarning: No metadata found in model")
324
+
325
+ # Check if model directory name contains context length pattern (ctxXXX)
326
+ ctx_len = 512
327
+ if args.context_length is None:
328
+ import re
329
+ ctx_match = re.search(r'ctx(\d+)', str(args.d))
330
+ if ctx_match:
331
+ ctx_len0 = int(ctx_match.group(1))
332
+ if 512 <= ctx_len0 <= 8096:
333
+ ctx_len = ctx_len0
334
+ print(f"\nDetected context length {ctx_len} from directory name")
335
+ else:
336
+ print(f"\nWarning: No context length found in directory name {args.d}; using default {ctx_len}")
337
+ else:
338
+ ctx_len = args.context_length
339
+
340
+ # Use defaults or values from args
341
+ metadata['context_length'] = ctx_len
342
+ metadata['state_length'] = ctx_len
343
+ # Get batch size from args or use default
344
+ metadata['batch_size'] = getattr(args, 'batch_size', 64)
345
+ metadata['lut_bits'] = 4
346
+ metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
347
+ print("\nUsing parameters:")
348
+ print(f" Context Length: {metadata['context_length']}")
349
+ print(f" State Length: {metadata['state_length']}")
350
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
351
+ print(f" LUT Bits: {metadata['lut_bits']}")
352
+ print(f" Number of Chunks: {metadata['num_chunks']}")
353
+
354
+ # Override with values from args if they exist
355
+ if hasattr(args, 'batch_size') and args.batch_size is not None:
356
+ metadata['batch_size'] = args.batch_size
357
+ print(f"\nOverriding batch size from args: {args.batch_size}")
358
+ if hasattr(args, 'num_chunks') and args.num_chunks is not None:
359
+ metadata['num_chunks'] = args.num_chunks
360
+ print(f"\nOverriding num chunks from args: {args.num_chunks}")
361
+
362
+ return metadata
363
+
364
+ def load_models(args,metadata):
365
+ """Load all required models and extract metadata."""
366
+ print("\nLoading models...")
367
+
368
+ try:
369
+ # Load embeddings model
370
+ print("\nLoading embeddings model...")
371
+ embed_path = parse_model_path(args.embed)
372
+ print(f"Loading from: {embed_path}")
373
+ embed_model = load_model(embed_path)
374
+ print("Embeddings model loaded successfully")
375
+ metadata = load_metadata(embed_model,args)
376
+
377
+
378
+
379
+ # Load LM head model
380
+ print("\nLoading LM head model...")
381
+ lmhead_path = parse_model_path(args.lmhead)
382
+ print(f"Loading from: {lmhead_path}")
383
+ lmhead_model = load_model(lmhead_path)
384
+ print("LM head model loaded successfully")
385
+
386
+ # Parse FFN path and find chunks if needed
387
+ print("\nLoading FFN+PREFILL model(s)...")
388
+ ffn_path = parse_model_path(args.ffn)
389
+ chunk_no, total_chunks = parse_ffn_filename(ffn_path)
390
+
391
+ ffn_models = []
392
+ if chunk_no and total_chunks:
393
+ print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
394
+ # Find and load all chunks
395
+ chunk_paths = find_all_chunks(ffn_path)
396
+ if len(chunk_paths) != total_chunks:
397
+ raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")
398
+
399
+ for chunk_path in chunk_paths:
400
+ print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
401
+ try:
402
+ # For chunked models, we need both infer and prefill functions
403
+ ffn_models.append({
404
+ 'infer': load_model(chunk_path, function_name='infer'),
405
+ 'prefill': load_model(chunk_path, function_name='prefill')
406
+ })
407
+ print("Chunk loaded successfully")
408
+ except Exception as e:
409
+ print(f"Error loading chunk {chunk_path}: {str(e)}")
410
+ raise
411
+ metadata = load_metadata(ffn_models[0],args)
412
+
413
+ else:
414
+ print("\nLoading single FFN model...")
415
+ ffn_models.append(load_model(ffn_path))
416
+ print("FFN model loaded successfully")
417
+
418
+ return embed_model, ffn_models, lmhead_model, metadata
419
+
420
+ except Exception as e:
421
+ print(f"\nError loading models: {str(e)}")
422
+ print("\nPlease ensure all model files exist and are accessible.")
423
+ print("Expected files:")
424
+ print(f" Embeddings: {args.embed}")
425
+ print(f" LM Head: {args.lmhead}")
426
+ print(f" FFN: {args.ffn}")
427
+ raise
428
+
429
+ # At the top of the file, make this a default path
430
+
431
+ def initialize_tokenizer(model_path=None):
432
+ """Initialize and configure the tokenizer."""
433
+ try:
434
+
435
+
436
+ tokenizer = AutoTokenizer.from_pretrained(
437
+ str(model_path),
438
+ use_fast=False,
439
+ trust_remote_code=True
440
+ )
441
+
442
+ print("\nTokenizer Configuration:")
443
+ print(f"Tokenizer type: {type(tokenizer)}")
444
+ print(f"Tokenizer name: {tokenizer.__class__.__name__}")
445
+ print(f"Vocabulary size: {len(tokenizer)}")
446
+ print(f"Model max length: {tokenizer.model_max_length}")
447
+
448
+ if tokenizer.pad_token is None:
449
+ tokenizer.pad_token = tokenizer.eos_token
450
+ tokenizer.pad_token_id = tokenizer.eos_token_id
451
+ print("Set PAD token to EOS token")
452
+
453
+ tokenizer.padding_side = "left"
454
+
455
+ print(f"\nSpecial Tokens:")
456
+ print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
457
+ print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
458
+ print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
459
+ print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")
460
+
461
+ return tokenizer
462
+
463
+ except Exception as e:
464
+ print(f"\nError: Failed to load tokenizer from {model_path}")
465
+ print(f"Error details: {str(e)}")
466
+ print(f"Error type: {type(e)}")
467
+ print("\nThis code requires a Llama 3.2 model for chat template functionality.")
468
+ print("Please provide the path to a Llama 3.2 model directory.")
469
+ import traceback
470
+ traceback.print_exc()
471
+ raise
472
+
473
+
474
+
475
+ def make_causal_mask(length, start):
476
+ """Create causal attention mask."""
477
+ mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
478
+ row_indices = np.arange(length).reshape(length, 1)
479
+ col_indices = np.arange(length).reshape(1, length)
480
+ mask[:, :, col_indices <= (row_indices + start)] = 0
481
+ return mask
482
+
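The mask built above is a full `(1, 1, length, length)` float16 tensor with 0 where attention is allowed and `-inf` elsewhere. A minimal standalone check (the helper is repeated here only so the snippet runs on its own; the 4-token length is illustrative):

```python
import numpy as np

def make_causal_mask(length, start):
    # Same construction as above: 0 where col <= row + start, -inf elsewhere.
    mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
    row_indices = np.arange(length).reshape(length, 1)
    col_indices = np.arange(length).reshape(1, length)
    mask[:, :, col_indices <= (row_indices + start)] = 0
    return mask

allowed = (make_causal_mask(4, 0)[0, 0] == 0).astype(int)
print(allowed)
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 0]
#  [1 1 1 1]]
```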
483
+ def run_prefill(embed_model, ffn_models, input_ids, current_pos, context_length, batch_size, state, causal_mask):
484
+ """Run prefill on the input sequence."""
485
+ #print(f"[DEBUG] Running prefill from 0 to {current_pos}")
486
+
487
+ # Process in batches
488
+ batch_pos = 0
489
+ while batch_pos < current_pos:
490
+ batch_end = min(batch_pos + batch_size, current_pos)
491
+ current_batch_size = batch_end - batch_pos
492
+
493
+ #print(f"[DEBUG] Prefill batch {batch_pos}-{batch_end} (size={current_batch_size})")
494
+
495
+ # Get current batch
496
+ batch_input = input_ids[:, batch_pos:batch_end]
497
+
498
+ # Pad to full batch size
499
+ batch_input = F.pad(
500
+ batch_input,
501
+ (0, batch_size - current_batch_size),
502
+ value=0
503
+ )
504
+
505
+ # Generate position IDs for this batch
506
+ position_ids = torch.arange(batch_pos, batch_pos + batch_size, dtype=torch.int32)
507
+
508
+ # Use the pre-initialized causal mask and extract the batch portion
509
+ batch_causal_mask = causal_mask[:, :, batch_pos:batch_pos + batch_size, :]
510
+
511
+ # Run embeddings
512
+ hidden_states = torch.from_numpy(
513
+ embed_model.predict({'input_ids': batch_input.numpy()})['hidden_states']
514
+ )
515
+
516
+ # Run through FFN chunks
517
+ for ffn_model in ffn_models:
518
+ if isinstance(ffn_model, dict):
519
+ inputs = {
520
+ 'hidden_states': hidden_states.numpy(),
521
+ 'position_ids': position_ids.numpy(),
522
+ 'causal_mask': batch_causal_mask.numpy(),
523
+ 'current_pos': np.array([batch_pos], dtype=np.int32)
524
+ }
525
+ output = ffn_model['prefill'].predict(inputs, state)
526
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
527
+
528
+ batch_pos = batch_end
529
+
530
+ return torch.tensor([current_pos], dtype=torch.int32)
531
+
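Prefill walks the prompt in fixed-size batches and zero-pads the final partial batch up to the model's prefill batch size. A small sketch of the batch boundaries this loop produces (the 150-token prompt and batch size 64 are assumed values):

```python
# Illustration of the batching in run_prefill above (values are made up).
current_pos, batch_size = 150, 64
batch_pos = 0
while batch_pos < current_pos:
    batch_end = min(batch_pos + batch_size, current_pos)
    pad = batch_size - (batch_end - batch_pos)   # zero-padding added to the last batch
    print(f"prefill tokens {batch_pos}..{batch_end - 1}, padded by {pad}")
    batch_pos = batch_end
# prefill tokens 0..63, padded by 0
# prefill tokens 64..127, padded by 0
# prefill tokens 128..149, padded by 42
```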
532
+ def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, state, causal_mask, temperature=0.0):
533
+ """Generate the next token."""
534
+ # Get current token
535
+ current_token = input_ids[:, pos-1:pos]
536
+
537
+ # Run embeddings
538
+ hidden_states = torch.from_numpy(
539
+ embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
540
+ )
541
+
542
+ # Create masks
543
+ update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
544
+ update_mask[0, 0, pos-1, 0] = 1.0
545
+ position_ids = torch.tensor([pos-1], dtype=torch.int32)
546
+
547
+ # Use the pre-initialized causal mask and extract the single position portion
548
+ single_causal_mask = causal_mask[:, :, pos-1:pos, :]
549
+
550
+ # Run through FFN chunks
551
+ for ffn_model in ffn_models:
552
+ if isinstance(ffn_model, dict):
553
+ inputs = {
554
+ 'hidden_states': hidden_states.numpy(),
555
+ 'update_mask': update_mask.numpy(),
556
+ 'position_ids': position_ids.numpy(),
557
+ 'causal_mask': single_causal_mask.numpy(),
558
+ 'current_pos': position_ids.numpy()
559
+ }
560
+ output = ffn_model['infer'].predict(inputs, state)
561
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
562
+
563
+ # Run LM head and get next token
564
+ lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})
565
+
566
+ if 'logits1' in lm_output:
567
+ logits_parts = []
568
+ for i in range(1, 9):
569
+ key = f'logits{i}'
570
+ if key in lm_output:
571
+ logits_parts.append(torch.from_numpy(lm_output[key]))
572
+ logits = torch.cat(logits_parts, dim=-1)
573
+ else:
574
+ logits = torch.from_numpy(lm_output['output_logits'])
575
+
576
+ if temperature > 0:
577
+ logits = logits / temperature
578
+ probs = F.softmax(logits[0, -1, :], dim=-1)
579
+ next_token = torch.multinomial(probs, num_samples=1).item()
580
+ else:
581
+ next_token = torch.argmax(logits[0, -1, :]).item()
582
+
583
+ return next_token
584
+
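The sampling step above is plain greedy decoding when temperature is 0, and temperature-scaled multinomial sampling otherwise. A toy example with made-up logits:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[[-1.0, 2.0, 0.5, 1.5]]])  # [batch, seq, vocab], illustrative values
temperature = 0.0

if temperature > 0:
    probs = F.softmax(logits[0, -1, :] / temperature, dim=-1)
    next_token = torch.multinomial(probs, num_samples=1).item()
else:
    next_token = torch.argmax(logits[0, -1, :]).item()

print(next_token)  # 1 -> greedy decoding picks the highest-scoring vocabulary entry
```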
585
+ def create_unified_state(ffn_models, context_length):
586
+ """Create unified KV cache state for transformer."""
587
+ if isinstance(ffn_models[0], dict):
588
+ # Use first FFN model's prefill function to create state
589
+ state = ffn_models[0]['prefill'].make_state()
590
+ print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
591
+ return state
592
+ else:
593
+ state = ffn_models[0].make_state()
594
+ print("\nCreated unified transformer state")
595
+ return state
596
+
597
+ def initialize_causal_mask(context_length):
598
+ """Initialize causal mask for transformer attention."""
599
+ causal_mask = make_causal_mask(context_length, 0)
600
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
601
+ print(f"\nInitialized causal mask for context length {context_length}")
602
+ return causal_mask
603
+
604
+ def get_user_input():
605
+ """Get input from user, handling special key combinations."""
606
+ global THINKING_MODE
607
+ try:
608
+ import termios
609
+ import tty
610
+ import sys
611
+
612
+ def _getch():
613
+ fd = sys.stdin.fileno()
614
+ old_settings = termios.tcgetattr(fd)
615
+ try:
616
+ tty.setraw(sys.stdin.fileno())
617
+ ch = sys.stdin.read(1)
618
+ finally:
619
+ termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
620
+ return ch
621
+
622
+ buffer = []
623
+ while True:
624
+ char = _getch()
625
+
626
+ # Debug: print the character code
627
+ print(f"\nKey pressed: {repr(char)} (hex: {hex(ord(char))})")
628
+
629
+ # Check for Enter key
630
+ if char == '\r' or char == '\n':
631
+ print() # Move to next line
632
+ input_text = ''.join(buffer)
633
+ # Check if the command is /t
634
+ if input_text == '/t':
635
+ THINKING_MODE = not THINKING_MODE
636
+ print(f"Thinking mode {'ON' if THINKING_MODE else 'OFF'}")
637
+ buffer = [] # Clear buffer
638
+ print(f"\n{LIGHT_GREEN}You{' (thinking)' if THINKING_MODE else ''}:{RESET_COLOR}", end=' ', flush=True)
639
+ continue
640
+ return input_text
641
+
642
+ # Handle backspace
643
+ if char == '\x7f': # backspace
644
+ if buffer:
645
+ buffer.pop()
646
+ sys.stdout.write('\b \b') # Erase character
647
+ sys.stdout.flush()
648
+ continue
649
+
650
+ # Handle Ctrl-C
651
+ if char == '\x03': # Ctrl-C
652
+ print("^C")
653
+ raise KeyboardInterrupt
654
+
655
+ # Print character and add to buffer
656
+ sys.stdout.write(char)
657
+ sys.stdout.flush()
658
+ buffer.append(char)
659
+
660
+ except ImportError:
661
+ # Fallback for systems without termios
662
+ return input("> ")
663
+
664
+ def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, causal_mask, auto_prompt=None, warmup=False):
665
+ """Interactive chat loop."""
666
+ global THINKING_MODE
667
+ global DEBUG_LEVEL
668
+ context_length = metadata.get('context_length')
669
+ batch_size = metadata.get('batch_size', 64)
670
+
671
+ if not warmup:
672
+ print(f"\nUsing context length: {context_length}")
673
+ print("\nStarting chat session. Press Ctrl+D to exit.")
674
+ print("Type your message and press Enter to chat. Use /t to toggle thinking mode.")
675
+ print(f"Thinking mode is {'ON' if THINKING_MODE else 'OFF'}")
676
+
677
+ # Keep track of conversation history
678
+ conversation = []
679
+
680
+ try:
681
+ while True:
682
+ try:
683
+ if not warmup:
684
+ print(f"\n{LIGHT_GREEN}You{' (thinking)' if THINKING_MODE else ''}:{RESET_COLOR}", end=' ', flush=True)
685
+ if auto_prompt is not None:
686
+ user_input = auto_prompt
687
+ if not warmup:
688
+ print(user_input)
689
+ else:
690
+ user_input = input().strip()
691
+ except EOFError:
692
+ if not warmup:
693
+ print("\nExiting chat...")
694
+ break
695
+
696
+ if not user_input:
697
+ continue
698
+
699
+ # Handle /t command
700
+ if user_input == "/t":
701
+ THINKING_MODE = not THINKING_MODE
702
+ print(f"Thinking mode {'ON' if THINKING_MODE else 'OFF'}")
703
+ continue
704
+
705
+ # Add user message to conversation
706
+ conversation.append({"role": "user", "content": user_input})
707
+
708
+ # Format using chat template with full history
709
+ if THINKING_MODE:
710
+ # Add thinking prompt to system message
711
+ conversation_with_thinking = [{"role": "system", "content": THINKING_PROMPT}] + conversation
712
+ base_input_ids = tokenizer.apply_chat_template(
713
+ conversation_with_thinking,
714
+ return_tensors="pt",
715
+ add_generation_prompt=True
716
+ ).to(torch.int32)
717
+
718
+ # Print full prompt if debug level >= 1
719
+ if DEBUG_LEVEL >= 1 and not warmup:
720
+ print(f"\n{DARK_BLUE}Debug: Full prompt with thinking:{RESET_COLOR}")
721
+ print(tokenizer.decode(base_input_ids[0]))
722
+ else:
723
+ base_input_ids = tokenizer.apply_chat_template(
724
+ conversation,
725
+ return_tensors="pt",
726
+ add_generation_prompt=True
727
+ ).to(torch.int32)
728
+
729
+ # Print full prompt if debug level >= 1
730
+ if DEBUG_LEVEL >= 1 and not warmup:
731
+ print(f"\n{DARK_BLUE}Debug: Full prompt:{RESET_COLOR}")
732
+ print(tokenizer.decode(base_input_ids[0]))
733
+
734
+ # Check if we need to trim history
735
+ while base_input_ids.size(1) > context_length - 100: # Leave room for response
736
+ # Remove oldest message pair (user + assistant)
737
+ if len(conversation) > 2:
738
+ conversation = conversation[2:] # Remove oldest pair
739
+ base_input_ids = tokenizer.apply_chat_template(
740
+ conversation,
741
+ return_tensors="pt",
742
+ add_generation_prompt=True
743
+ ).to(torch.int32)
744
+ else:
745
+ # If only current message remains and still too long, truncate
746
+ base_input_ids = base_input_ids[:, -context_length//2:]
747
+ break
748
+
749
+ context_pos = base_input_ids.size(1)
750
+
751
+ # Pad sequence to context_size
752
+ input_ids = F.pad(
753
+ base_input_ids,
754
+ (0, context_length - context_pos),
755
+ value=0
756
+ )
757
+
758
+ if not warmup:
759
+ print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)
760
+
761
+ # Initialize token printer and collect response
762
+ token_printer = TokenPrinter(tokenizer)
763
+ response_tokens = []
764
+ generation_start_time = time.time()
765
+
766
+ try:
767
+ # Run prefill on entire context
768
+ current_pos = run_prefill(
769
+ embed_model,
770
+ ffn_models,
771
+ input_ids,
772
+ context_pos,
773
+ context_length,
774
+ batch_size,
775
+ state,
776
+ causal_mask
777
+ )
778
+ #print(f"\n[DEBUG] After initial prefill - current_pos: {current_pos}")
779
+
780
+ # Generation loop
781
+ pos = context_pos
782
+ tokens_generated = 0
783
+ inference_start = time.time() # Start inference timing
784
+
785
+ while True:
786
+ # Check if we need to shift window
787
+ if pos >= context_length - 2:
788
+ # Calculate shift to maintain full batches
789
+ batch_size = metadata.get('batch_size', 64)
790
+ # Calculate max batches that fit in context
791
+ max_batches = context_length // batch_size
792
+ desired_batches = max(1, max_batches - 2) # Leave room for new tokens
793
+ new_size = min(desired_batches * batch_size, context_length - batch_size)
794
+
795
+ # Create shifted input_ids
796
+ tmp = torch.zeros((1, context_length), dtype=torch.int32)
797
+ tmp[:,0:new_size] = input_ids[:,pos-new_size:pos]
798
+ input_ids = tmp
799
+
800
+ # Reset state and run prefill
801
+ # keep the same state
802
+ #state = create_unified_state(ffn_models, context_length)
803
+ current_pos = run_prefill(
804
+ embed_model,
805
+ ffn_models,
806
+ input_ids,
807
+ new_size, # Prefill the entire shifted content
808
+ context_length,
809
+ batch_size,
810
+ state,
811
+ causal_mask
812
+ )
813
+
814
+ # Start generating from the next position
815
+ pos = new_size # Don't back up, continue from where we left off
816
+
817
+ #print(f"\n[DEBUG] After shift - next token will be at pos {pos}")
818
+ #print(f"[DEBUG] Context before next token: {tokenizer.decode(input_ids[0, pos-40:pos])}")
819
+
820
+ window_shifted = True
821
+
822
+ # Generate next token
823
+ next_token = generate_next_token(
824
+ embed_model,
825
+ ffn_models,
826
+ lmhead_model,
827
+ input_ids,
828
+ pos,
829
+ context_length,
830
+ state,
831
+ causal_mask
832
+ )
833
+
834
+ # Add token
835
+ input_ids[0, pos] = next_token
836
+ if not warmup:
837
+ token_printer.add_token(next_token)
838
+ token_printer.drain_buffer()
839
+ response_tokens.append(next_token)
840
+
841
+ pos += 1
842
+ tokens_generated += 1
843
+
844
+ # In warmup mode, limit tokens
845
+ if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
846
+ break
847
+
848
+ if next_token == tokenizer.eos_token_id:
849
+ break
850
+
851
+ inference_time = time.time() - inference_start # Calculate inference time
852
+
853
+ # Add assistant response to conversation
854
+ response_text = token_printer.stop()
855
+ conversation.append({"role": "assistant", "content": response_text})
856
+
857
+ # Print stats only if not in warmup
858
+ if not warmup:
859
+ total_time = time.time() - generation_start_time
860
+ prefill_time = total_time - inference_time
861
+ inference_tokens_per_sec = len(response_tokens) / inference_time if inference_time > 0 else 0
862
+ prefill_ms = prefill_time * 1000
863
+ prefill_tokens_per_sec = context_pos / prefill_time if prefill_time > 0 else 0
864
+ print(f"{DARK_BLUE}{inference_tokens_per_sec:.1f} t/s, "
865
+ f"TTFT: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s), "
866
+ f"{len(response_tokens)} tokens{RESET_COLOR}")
867
+
868
+ if auto_prompt is not None:
869
+ break
870
+
871
+ except KeyboardInterrupt:
872
+ if not warmup:
873
+ print("\nGeneration interrupted")
874
+ token_printer.stop()
875
+ continue
876
+
877
+ except Exception as e:
878
+ if not warmup:
879
+ print(f"\nError in chat loop: {str(e)}")
880
+ import traceback
881
+ traceback.print_exc()
882
+
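When generation reaches the end of the context window, chat_loop keeps the most recent tokens, re-runs prefill over them, and continues generating. A worked example of the shift arithmetic with the defaults used in this repo (context 512, batch 64):

```python
# Worked example of the window-shift sizes computed in chat_loop (assumed defaults).
context_length, batch_size = 512, 64

max_batches = context_length // batch_size      # 8 full batches fit in the window
desired_batches = max(1, max_batches - 2)       # 6 -> leave room for new tokens
new_size = min(desired_batches * batch_size,    # 384
               context_length - batch_size)     # capped at 448
print(new_size)
# 384: the latest 384 tokens are copied to the front of input_ids, prefilled again,
# and generation resumes at position 384.
```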
883
+ def main():
884
+ args = parse_args()
885
+ global DEBUG_LEVEL
886
+ DEBUG_LEVEL = args.debug_level
887
+
888
+ # Convert directory to absolute path
889
+ model_dir = Path(args.d).resolve()
890
+ if not model_dir.exists():
891
+ print(f"\nError: Model directory not found: {model_dir}")
892
+ return 1
893
+
894
+ print(f"\nUsing model directory: {model_dir}")
895
+ print(f"Context length: {args.context_length}")
896
+
897
+ try:
898
+ # Update paths to be relative to model directory
899
+ args.embed = str(model_dir / args.embed)
900
+ args.ffn = str(model_dir / args.ffn)
901
+ args.lmhead = str(model_dir / args.lmhead)
902
+
903
+ # Handle tokenizer path separately since it's not relative to model_dir
904
+ if args.tokenizer is None:
905
+ args.tokenizer = str(model_dir)
906
+
907
+ if not Path(args.tokenizer).exists():
908
+ print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
909
+ return 1
910
+
911
+ args.tokenizer = str(Path(args.tokenizer).resolve()) # Convert to absolute path
912
+ print(f"Using tokenizer path: {args.tokenizer}")
913
+
914
+ metadata = {}
915
+ # Load models and extract metadata
916
+ embed_model, ffn_models, lmhead_model, metadata = load_models(args,metadata)
917
+
918
+ print(f"\nMetadata befor args.context_length: {metadata}")
919
+
920
+ # Override context length from command line if provided
921
+ if args.context_length is not None:
922
+ metadata['context_length'] = args.context_length
923
+ metadata['state_length'] = args.context_length # Also update state_length
924
+ print(f"\nOverriding context length from command line: {args.context_length}")
925
+
926
+ print(f"\nMetadata after load_models: {metadata}")
927
+
928
+ # Load tokenizer with resolved path
929
+ tokenizer = initialize_tokenizer(args.tokenizer)
930
+ if tokenizer is None:
931
+ raise RuntimeError("Failed to initialize tokenizer")
932
+
933
+ # Create unified state once
934
+ state = create_unified_state(ffn_models, metadata['context_length'])
935
+
936
+ # Initialize causal mask once
937
+ causal_mask = initialize_causal_mask(metadata['context_length'])
938
+
939
+ # Warmup runs to prevent Python GIL issues with CoreML
940
+ if not args.nw:
941
+ for i in range(2):
942
+ chat_loop(
943
+ embed_model=embed_model,
944
+ ffn_models=ffn_models,
945
+ lmhead_model=lmhead_model,
946
+ tokenizer=tokenizer,
947
+ metadata=metadata,
948
+ state=state, # Pass the state
949
+ causal_mask=causal_mask, # Pass the causal mask
950
+ warmup=True,
951
+ auto_prompt="who are you?"
952
+ )
953
+
954
+ # Main run
955
+ chat_loop(
956
+ embed_model=embed_model,
957
+ ffn_models=ffn_models,
958
+ lmhead_model=lmhead_model,
959
+ tokenizer=tokenizer,
960
+ metadata=metadata,
961
+ state=state, # Pass the state
962
+ causal_mask=causal_mask, # Pass the causal mask
963
+ warmup=False,
964
+ auto_prompt=args.prompt
965
+ )
966
+
967
+ except Exception as e:
968
+ print(f"\nError: {str(e)}")
969
+ import traceback
970
+ traceback.print_exc()
971
+ return 1
972
+
973
+ return 0
974
+
975
+ if __name__ == "__main__":
976
+ exit(main())
config.json ADDED
@@ -0,0 +1,4 @@
1
+ {
2
+ "tokenizer_class": "LlamaTokenizer",
3
+ "model_type": "llama"
4
+ }
meta.yaml ADDED
@@ -0,0 +1,23 @@
1
+ model_info:
2
+ name: anemll-Llama-3.1-Nemotron-Nano-8B-v1-ctx512
3
+ version: 0.3.0
4
+ description: |
5
+ Demonstrates running Llama-3.1-Nemotron-Nano-8B-v1 on Apple Neural Engine
6
+ Context length: 512
7
+ Batch size: 64
8
+ Chunks: 16
9
+ license: MIT
10
+ author: Anemll
11
+ framework: Core ML
12
+ language: Python
13
+ parameters:
14
+ context_length: 512
15
+ batch_size: 64
16
+ lut_embeddings: none
17
+ lut_ffn: none
18
+ lut_lmhead: none
19
+ num_chunks: 16
20
+ model_prefix: nemo_
21
+ embeddings: nemo__embeddings.mlmodelc
22
+ lm_head: nemo__lm_head.mlmodelc
23
+ ffn: nemo__FFN_PF.mlmodelc
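The scripts in this repository (chat.py above and prefill.py below) read these parameters to reconstruct the model file names. A short sketch of that mapping, assuming meta.yaml sits in the model directory; the printed names match what the archives in this repo unzip to:

```python
import yaml

with open("meta.yaml") as f:
    params = yaml.safe_load(f)["model_info"]["parameters"]

prefix = params.get("model_prefix", "llama")          # "nemo_"
lut_ffn = f"_lut{params['lut_ffn']}" if params["lut_ffn"] != "none" else ""
lut_embed = f"_lut{params['lut_embeddings']}" if params["lut_embeddings"] != "none" else ""
num_chunks = int(params["num_chunks"])                 # 16

print(f"{prefix}_embeddings{lut_embed}")                       # nemo__embeddings
print(f"{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}")  # nemo__FFN_PF_chunk_01of16
```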
nemo__FFN_PF_chunk_01of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edcdbad76faf4ee458a263f3a1aa8b820e0d0ac620bb0c2c5c12cc11be308af5
3
+ size 694852699
nemo__FFN_PF_chunk_02of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0c1da673a44893fa22cff3f4bb99d346d2d5c27d2830b9006f776e904821e62d
3
+ size 692340458
nemo__FFN_PF_chunk_03of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7510980bd7a1cb45dafe1a17d8702bf3555410c1dd49c5c4801241e61da7801
3
+ size 692351023
nemo__FFN_PF_chunk_04of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:33edbb186cdd228ea5f148edeb5c71814b2626c17b36fb2bf3ea05bf40bc9833
3
+ size 692319786
nemo__FFN_PF_chunk_05of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5b56cbdc705a7e7eefcca36e19161972409546471c0c1023c88d6a0221a5b7c
3
+ size 692485163
nemo__FFN_PF_chunk_06of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c87dab3c4386b599f4962b03516a24146bcbabaf905230df385dbd21951f5c35
3
+ size 692446777
nemo__FFN_PF_chunk_07of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4328e1068b54482fee75bcca4d767b9764481f2ac2a4b32efefcd412eb5054bc
3
+ size 692547980
nemo__FFN_PF_chunk_08of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89285c2ac7bd614edd851db8dbee2289195b09eca211f96ac399a97edcbfc349
3
+ size 692524197
nemo__FFN_PF_chunk_09of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:69984aec9d126550479685f93c02222ee3f621cab5d5bf618175cd4467d30a2f
3
+ size 692290432
nemo__FFN_PF_chunk_10of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb6fed8348c34a4dcb94e9e6274a3d4ce13c1cf55c4ea8064201a0f7e60e9a1f
3
+ size 691996314
nemo__FFN_PF_chunk_11of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:347ef4dfb8720c5a80ec921171b4d0c7ec4af078c381f4cd0f87a79c557f110d
3
+ size 691894392
nemo__FFN_PF_chunk_12of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:75ec0763472248d6abb9eb7ff50ff0bb17e76918d8cd0cf90d6b169c31d7c8c2
3
+ size 691735112
nemo__FFN_PF_chunk_13of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:10f9728404778e4a13dc7f69976acf75630074bb9baec63bd06b2d4ca5a18979
3
+ size 691731488
nemo__FFN_PF_chunk_14of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6fb577e954585168f14b0ed51387c3aaa3fc5248b03f1a096f4af57ecad07ed8
3
+ size 691858182
nemo__FFN_PF_chunk_15of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ad7cc2817d8089078e15cae9e83aeffd0813a159af79a025214de1deba6946ec
3
+ size 692063899
nemo__FFN_PF_chunk_16of16.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:744ca5f50e7bc118d7a7781cd163019bc78490a84d0fbf727a61faef21a55912
3
+ size 692765141
nemo__embeddings.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e3f84f3a73085f9046ab5747964c27540052a25e08b5c0f43d955558c9164128
3
+ size 807306441
nemo__lm_head.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2707f57a46d48a972c0b4bc1a3b56196c6f451d0f0a65a2c6a3443746ae080e7
3
+ size 807604851
prefill.py ADDED
@@ -0,0 +1,644 @@
1
+ #!/usr/bin/env python3
2
+ # prefill.py
3
+ # Copyright (c) 2025 Anemll
4
+ # Licensed under the MIT License
5
+
6
+ import argparse
7
+ import os
8
+ import re
9
+ import glob
10
+ from pathlib import Path
11
+ import coremltools as ct
12
+ from transformers import AutoTokenizer
13
+ import torch
14
+ import torch.nn.functional as F
15
+ import numpy as np
16
+ import time
17
+ import yaml
18
+ import sys
19
+
20
+ # ANSI color codes
21
+ LIGHT_BLUE = "\033[94m"
22
+ DARK_BLUE = "\033[34m"
23
+ LIGHT_GREEN = "\033[92m"
24
+ RESET_COLOR = "\033[0m"
25
+
26
+ def parse_model_path(path):
27
+ """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
28
+ path = Path(path)
29
+
30
+ # If path exists exactly as specified, return it
31
+ if path.exists():
32
+ return str(path)
33
+
34
+ # Try with both extensions
35
+ candidates = [
36
+ path, # Original path
37
+ path.with_suffix('.mlmodelc'), # With .mlmodelc
38
+ path.with_suffix('.mlpackage'), # With .mlpackage
39
+ Path(str(path) + '.mlmodelc'), # Handle case where extension is included
40
+ Path(str(path) + '.mlpackage')
41
+ ]
42
+
43
+ # Try all possible paths
44
+ for candidate in candidates:
45
+ if candidate.exists():
46
+ print(f"Found model at: {candidate}")
47
+ return str(candidate)
48
+
49
+ # If we get here, no valid path was found
50
+ print("\nError: Model not found. Tried following paths:")
51
+ for candidate in candidates:
52
+ print(f" {candidate}")
53
+ raise FileNotFoundError(f"Model not found: {path}")
54
+
55
+ def parse_ffn_filename(path):
56
+ """Parse FFN model filename to extract chunk information."""
57
+ path = Path(path)
58
+ pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
59
+ match = re.search(pattern, path.name)
60
+
61
+ if match:
62
+ current_chunk = int(match.group(1))
63
+ total_chunks = int(match.group(2))
64
+ return current_chunk, total_chunks
65
+ return None, None
66
+
67
+ def find_all_chunks(base_path):
68
+ """Find all chunk files matching the base FFN path pattern."""
69
+ path = Path(base_path)
70
+ pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
71
+ return sorted(glob.glob(pattern))
72
+
73
+ def load_model(path, function_name=None):
74
+ """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
75
+ path = Path(path)
76
+ compute_unit = ct.ComputeUnit.CPU_AND_NE
77
+
78
+ try:
79
+ if path.suffix == '.mlmodelc':
80
+ # For compiled models (.mlmodelc), use CompiledMLModel
81
+ if function_name:
82
+ return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
83
+ else:
84
+ return ct.models.CompiledMLModel(str(path), compute_unit)
85
+ else:
86
+ # For packages (.mlpackage)
87
+ if function_name:
88
+ return ct.models.MLModel(str(path), function_name=function_name)
89
+ else:
90
+ return ct.models.MLModel(str(path))
91
+
92
+ except RuntimeError as e:
93
+ if "valid manifest does not exist" in str(e):
94
+ print(f"\nError: Could not load compiled model at {path}")
95
+ print("This might be because:")
96
+ print("1. The model is not properly compiled")
97
+ print("2. The model was compiled for a different OS version")
98
+ print("3. The model needs to be recompiled")
99
+ print("\nTry using the .mlpackage version instead, or recompile the model.")
100
+ raise
101
+
102
+ def load_metadata(model, args):
103
+ # Extract metadata and config parameters
104
+ metadata = {}
105
+ if hasattr(model, 'user_defined_metadata'):
106
+ meta = model.user_defined_metadata
107
+
108
+ # Extract key parameters with defaults
109
+ metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
110
+ metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length']))
111
+ metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
112
+ metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
113
+ metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))
114
+
115
+ print("\nExtracted Parameters:")
116
+ print(f" Context Length: {metadata['context_length']}")
117
+ print(f" State Length: {metadata['state_length']}")
118
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
119
+ print(f" LUT Bits: {metadata['lut_bits']}")
120
+ print(f" Number of Chunks: {metadata['num_chunks']}")
121
+ else:
122
+ print("\nWarning: No metadata found in model")
123
+
124
+ # Check if model directory name contains context length pattern (ctxXXX)
125
+ ctx_len = 512
126
+ if args.context_length is None:
127
+ import re
128
+ ctx_match = re.search(r'ctx(\d+)', str(args.d))
129
+ if ctx_match:
130
+ ctx_len0 = int(ctx_match.group(1))
131
+ if 512 <= ctx_len0 <= 8096:
132
+ ctx_len = ctx_len0
133
+ print(f"\nDetected context length {ctx_len} from directory name")
134
+ else:
135
+ print(f"\nWarning: No context length found in directory, using default {ctx_len}")
136
+ else:
137
+ ctx_len = args.context_length
138
+
139
+ # Use defaults or values from args
140
+ metadata['context_length'] = ctx_len
141
+ metadata['state_length'] = ctx_len
142
+ # Get batch size from args or use default
143
+ metadata['batch_size'] = getattr(args, 'batch_size', 64)
144
+ metadata['lut_bits'] = 4
145
+ metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
146
+ print("\nUsing parameters:")
147
+ print(f" Context Length: {metadata['context_length']}")
148
+ print(f" State Length: {metadata['state_length']}")
149
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
150
+ print(f" LUT Bits: {metadata['lut_bits']}")
151
+ print(f" Number of Chunks: {metadata['num_chunks']}")
152
+
153
+ # Override with values from args if they exist
154
+ if hasattr(args, 'batch_size') and args.batch_size is not None:
155
+ metadata['batch_size'] = args.batch_size
156
+ print(f"\nOverriding batch size from args: {args.batch_size}")
157
+ if hasattr(args, 'num_chunks') and args.num_chunks is not None:
158
+ metadata['num_chunks'] = args.num_chunks
159
+ print(f"\nOverriding num chunks from args: {args.num_chunks}")
160
+
161
+ return metadata
162
+
163
+ def load_models(args, metadata):
164
+ """Load all required models and extract metadata."""
165
+ print("\nLoading models...")
166
+
167
+ try:
168
+ # Load embeddings model
169
+ print("\nLoading embeddings model...")
170
+ embed_path = parse_model_path(args.embed)
171
+ print(f"Loading from: {embed_path}")
172
+ embed_model = load_model(embed_path)
173
+ print("Embeddings model loaded successfully")
174
+ metadata = load_metadata(embed_model, args)
175
+
176
+ # Load FFN model(s)
177
+ print("\nLoading PREFILL functionality only...")
178
+ ffn_path = parse_model_path(args.ffn)
179
+ chunk_no, total_chunks = parse_ffn_filename(ffn_path)
180
+
181
+ ffn_models = []
182
+ if chunk_no and total_chunks:
183
+ print(f"\nDetected chunked model with {total_chunks} chunks")
184
+ # Find and load all chunks
185
+ chunk_paths = find_all_chunks(ffn_path)
186
+ if len(chunk_paths) != total_chunks:
187
+ raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")
188
+
189
+ for chunk_path in chunk_paths:
190
+ print(f"\nLoading PREFILL function from chunk: {Path(chunk_path).name}")
191
+ try:
192
+ # For prefill testing, we only need the prefill function
193
+ prefill_model = load_model(chunk_path, function_name='prefill')
194
+ ffn_models.append(prefill_model)
195
+ print("Chunk loaded successfully (prefill only)")
196
+ except Exception as e:
197
+ print(f"Error loading chunk {chunk_path}: {str(e)}")
198
+ raise
199
+ metadata = load_metadata(ffn_models[0], args)
200
+ else:
201
+ print("\nLoading single model (prefill functionality only)...")
202
+ ffn_models.append(load_model(ffn_path))
203
+ print("Model loaded successfully")
204
+
205
+ return embed_model, ffn_models, metadata
206
+
207
+ except Exception as e:
208
+ print(f"\nError loading models: {str(e)}")
209
+ print("\nPlease ensure all model files exist and are accessible.")
210
+ print("Expected files:")
211
+ print(f" Embeddings: {args.embed}")
212
+ print(f" FFN: {args.ffn}")
213
+ raise
214
+
215
+ def initialize_tokenizer(model_path=None):
216
+ """Initialize and configure the tokenizer."""
217
+ try:
218
+ tokenizer = AutoTokenizer.from_pretrained(
219
+ str(model_path),
220
+ use_fast=False,
221
+ trust_remote_code=True
222
+ )
223
+
224
+ print("\nTokenizer Configuration:")
225
+ print(f"Tokenizer type: {type(tokenizer)}")
226
+ print(f"Tokenizer name: {tokenizer.__class__.__name__}")
227
+ print(f"Vocabulary size: {len(tokenizer)}")
228
+
229
+ if tokenizer.pad_token is None:
230
+ tokenizer.pad_token = tokenizer.eos_token
231
+ tokenizer.pad_token_id = tokenizer.eos_token_id
232
+ print("Set PAD token to EOS token")
233
+
234
+ tokenizer.padding_side = "left"
235
+
236
+ return tokenizer
237
+
238
+ except Exception as e:
239
+ print(f"\nError: Failed to load tokenizer from {model_path}")
240
+ print(f"Error details: {str(e)}")
241
+ raise
242
+
243
+ def make_causal_mask(length, start):
244
+ """Create causal attention mask."""
245
+ mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
246
+ row_indices = np.arange(length).reshape(length, 1)
247
+ col_indices = np.arange(length).reshape(1, length)
248
+ mask[:, :, col_indices <= (row_indices + start)] = 0
249
+ return mask
250
+
251
+ def initialize_causal_mask(context_length):
252
+ """Initialize causal mask for transformer attention."""
253
+ causal_mask = make_causal_mask(context_length, 0)
254
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
255
+ print(f"\nInitialized causal mask for context length {context_length}")
256
+ return causal_mask
257
+
258
+ def run_prefill(embed_model, ffn_models, input_ids, context_pos, context_length, batch_size=64, state=None, causal_mask=None):
259
+ """Run prefill on the input sequence."""
260
+ # Use provided causal mask or create one if not provided
261
+ if causal_mask is None:
262
+ causal_mask = make_causal_mask(context_length, 0)
263
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
264
+
265
+ # Process in batches
266
+ batch_pos = 0
267
+ while batch_pos < context_pos:
268
+ batch_end = min(batch_pos + batch_size, context_pos)
269
+ current_batch_size = batch_end - batch_pos
270
+
271
+ # Get current batch
272
+ batch_input = input_ids[:, batch_pos:batch_end]
273
+
274
+ # Always pad to full batch size for prefill
275
+ batch_input = F.pad(
276
+ batch_input,
277
+ (0, batch_size - current_batch_size),
278
+ value=0
279
+ )
280
+
281
+ # Generate position IDs for full batch size
282
+ position_ids = torch.arange(batch_size, dtype=torch.int32)
283
+ batch_causal_mask = causal_mask[:, :, :batch_size, :]
284
+
285
+ # Run embeddings with proper batch size
286
+ hidden_states = torch.from_numpy(
287
+ embed_model.predict({
288
+ 'input_ids': batch_input.numpy(),
289
+ 'batch_size': np.array([batch_size], dtype=np.int32)
290
+ })['hidden_states']
291
+ )
292
+
293
+ # Run through FFN chunks with state
294
+ for ffn_model in ffn_models:
295
+ # Handle both direct model and dictionary formats
296
+ if isinstance(ffn_model, dict) and 'prefill' in ffn_model:
297
+ # For backward compatibility with dictionary format
298
+ prefill_model = ffn_model['prefill']
299
+ else:
300
+ # Direct access for models loaded with function_name='prefill'
301
+ prefill_model = ffn_model
302
+
303
+ inputs = {
304
+ 'hidden_states': hidden_states.numpy(),
305
+ 'position_ids': position_ids.numpy(),
306
+ 'causal_mask': batch_causal_mask.numpy(),
307
+ 'current_pos': np.array([batch_pos], dtype=np.int32)
308
+ }
309
+ output = prefill_model.predict(inputs, state)
310
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
311
+
312
+ batch_pos = batch_end
313
+
314
+ return torch.tensor([context_pos], dtype=torch.int32)
315
+
316
+ def create_unified_state(ffn_models, context_length):
317
+ """Create unified KV cache state for transformer."""
318
+ if hasattr(ffn_models[0], 'make_state'):
319
+ # Direct access for models loaded with 'prefill' function_name
320
+ state = ffn_models[0].make_state()
321
+ print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
322
+ return state
323
+ else:
324
+ # Fallback for dictionary-based models (for backward compatibility)
325
+ if isinstance(ffn_models[0], dict) and 'prefill' in ffn_models[0]:
326
+ state = ffn_models[0]['prefill'].make_state()
327
+ print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
328
+ return state
329
+ else:
330
+ state = ffn_models[0].make_state()
331
+ print("\nCreated unified transformer state")
332
+ return state
333
+
334
+ def test_prefill_speed(embed_model, ffn_models, tokenizer, batch_size, context_length, num_test_tokens, num_runs=20, test_single_chunk=True):
335
+ """Test prefill speed with sample token sequences."""
336
+ print(f"\n{LIGHT_GREEN}Testing prefill speed for {num_test_tokens} tokens (using internal batch size {batch_size}){RESET_COLOR}")
337
+ print(f"Running {num_runs} iterations for warmup and measurement")
338
+
339
+ # Create sample input sequence of exactly num_test_tokens tokens
340
+ sample_text = "This is a test sequence. " * ((num_test_tokens + 4) // 5) # Ensure enough text
341
+ input_ids = tokenizer(sample_text, return_tensors="pt").input_ids.to(torch.int32)
342
+
343
+ # Trim or pad to exactly num_test_tokens tokens
344
+ if input_ids.size(1) > num_test_tokens:
345
+ input_ids = input_ids[:, :num_test_tokens]
346
+ elif input_ids.size(1) < num_test_tokens:
347
+ pad_length = num_test_tokens - input_ids.size(1)
348
+ input_ids = F.pad(input_ids, (0, pad_length), value=tokenizer.pad_token_id)
349
+
350
+ print(f"Sample input sequence length: {input_ids.size(1)} tokens")
351
+
352
+ # Test with all chunks first
353
+ print(f"\n{LIGHT_BLUE}Testing with all chunks ({len(ffn_models)} chunks){RESET_COLOR}")
354
+
355
+ # Create unified state
356
+ state_all_chunks = create_unified_state(ffn_models, context_length)
357
+
358
+ # Initialize causal mask
359
+ causal_mask = initialize_causal_mask(context_length)
360
+
361
+ # Run prefill multiple times for warmup and testing
362
+ all_chunks_times = []
363
+ for i in range(num_runs):
364
+ # Reset state for each run
365
+ if i == 0:
366
+ print("\nStarting warmup runs...")
367
+ elif i == num_runs // 2:
368
+ print("\nWarmup complete, starting measurement runs...")
369
+
370
+ start_time = time.time()
371
+
372
+ # Run prefill
373
+ run_prefill(
374
+ embed_model,
375
+ ffn_models,
376
+ input_ids,
377
+ input_ids.size(1), # context_pos
378
+ context_length,
379
+ batch_size, # Internal batching within run_prefill
380
+ state_all_chunks,
381
+ causal_mask
382
+ )
383
+
384
+ elapsed = time.time() - start_time
385
+ all_chunks_times.append(elapsed)
386
+
387
+ # Print progress
388
+ if i < num_runs // 2: # Warmup phase
389
+ print(f"Warmup run {i+1}/{num_runs//2}: {elapsed:.4f}s ({batch_size/elapsed:.1f} tokens/s)")
390
+ else: # Measurement phase
391
+ print(f"Run {i+1-num_runs//2}/{num_runs//2}: {elapsed:.4f}s ({batch_size/elapsed:.1f} tokens/s)")
392
+
393
+ # Calculate and report statistics for all chunks (excluding warmup runs)
394
+ all_chunks_measurement_times = all_chunks_times[num_runs // 2:]
395
+ all_chunks_avg_time = sum(all_chunks_measurement_times) / len(all_chunks_measurement_times)
396
+ all_chunks_min_time = min(all_chunks_measurement_times)
397
+ all_chunks_max_time = max(all_chunks_measurement_times)
398
+ all_chunks_tokens_per_sec = num_test_tokens / all_chunks_avg_time # Use num_test_tokens for speed calculation
399
+
400
+ print(f"\n{LIGHT_BLUE}All Chunks Prefill Speed Results:{RESET_COLOR}")
401
+ print(f"Number of Chunks: {len(ffn_models)}")
402
+ print(f"Test Tokens: {num_test_tokens} tokens")
403
+ print(f"Internal Batch Size: {batch_size} tokens")
404
+ print(f"Context Size: {context_length} tokens")
405
+ print(f"Average Time: {all_chunks_avg_time:.4f}s")
406
+ print(f"Min Time: {all_chunks_min_time:.4f}s")
407
+ print(f"Max Time: {all_chunks_max_time:.4f}s")
408
+ print(f"Average Speed: {all_chunks_tokens_per_sec:.1f} tokens/second")
409
+ print(f"Best Speed: {num_test_tokens / all_chunks_min_time:.1f} tokens/second") # Use num_test_tokens
410
+
411
+ # Test with single chunk if requested and if multiple chunks exist
412
+ single_chunk_tokens_per_sec = 0
413
+ if test_single_chunk and len(ffn_models) > 1:
414
+ print(f"\n{LIGHT_BLUE}Testing with single chunk (first chunk only){RESET_COLOR}")
415
+
416
+ # Create a list with only the first chunk
417
+ single_chunk_model = [ffn_models[0]]
418
+
419
+ # Create unified state for single chunk
420
+ state_single_chunk = create_unified_state(single_chunk_model, context_length)
421
+
422
+ # Run prefill multiple times for single chunk
423
+ single_chunk_times = []
424
+ for i in range(num_runs):
425
+ if i == 0:
426
+ print("\nStarting single chunk warmup runs...")
427
+ elif i == num_runs // 2:
428
+ print("\nSingle chunk warmup complete, starting measurement runs...")
429
+
430
+ start_time = time.time()
431
+
432
+ # Run prefill with single chunk
433
+ run_prefill(
434
+ embed_model,
435
+ single_chunk_model,
436
+ input_ids,
437
+ input_ids.size(1), # context_pos
438
+ context_length,
439
+ batch_size, # Internal batching within run_prefill
440
+ state_single_chunk,
441
+ causal_mask
442
+ )
443
+
444
+ elapsed = time.time() - start_time
445
+ single_chunk_times.append(elapsed)
446
+
447
+ # Print progress
448
+ if i < num_runs // 2: # Warmup phase
449
+ print(f"Single chunk warmup run {i+1}/{num_runs//2}: {elapsed:.4f}s ({batch_size/elapsed:.1f} tokens/s)")
450
+ else: # Measurement phase
451
+ print(f"Single chunk run {i+1-num_runs//2}/{num_runs//2}: {elapsed:.4f}s ({batch_size/elapsed:.1f} tokens/s)")
452
+
453
+ # Calculate and report statistics for single chunk
454
+ single_chunk_measurement_times = single_chunk_times[num_runs // 2:]
455
+ single_chunk_avg_time = sum(single_chunk_measurement_times) / len(single_chunk_measurement_times)
456
+ single_chunk_min_time = min(single_chunk_measurement_times)
457
+ single_chunk_max_time = max(single_chunk_measurement_times)
458
+ single_chunk_tokens_per_sec = num_test_tokens / single_chunk_avg_time # Use num_test_tokens
459
+
460
+ print(f"\n{LIGHT_BLUE}Single Chunk Prefill Speed Results:{RESET_COLOR}")
461
+ print(f"Test Tokens: {num_test_tokens} tokens")
462
+ print(f"Internal Batch Size: {batch_size} tokens")
463
+ print(f"Context Size: {context_length} tokens")
464
+ print(f"Average Time: {single_chunk_avg_time:.4f}s")
465
+ print(f"Min Time: {single_chunk_min_time:.4f}s")
466
+ print(f"Max Time: {single_chunk_max_time:.4f}s")
467
+ print(f"Average Speed: {single_chunk_tokens_per_sec:.1f} tokens/second")
468
+ print(f"Best Speed: {num_test_tokens / single_chunk_min_time:.1f} tokens/second") # Use num_test_tokens
469
+
470
+ # Calculate overhead per chunk
471
+ if len(ffn_models) > 1:
472
+ chunk_overhead = (all_chunks_avg_time - single_chunk_avg_time) / (len(ffn_models) - 1)
473
+ print(f"\n{LIGHT_GREEN}Chunk Overhead Analysis:{RESET_COLOR}")
474
+ print(f"Single Chunk Time: {single_chunk_avg_time:.4f}s")
475
+ print(f"All Chunks Time ({len(ffn_models)} chunks): {all_chunks_avg_time:.4f}s")
476
+ print(f"Additional Time Per Chunk: {chunk_overhead:.4f}s")
477
+ print(f"Overhead Percentage: {(all_chunks_avg_time/single_chunk_avg_time - 1)*100:.1f}%")
478
+
479
+ return all_chunks_tokens_per_sec, single_chunk_tokens_per_sec
480
+
481
+ def parse_args():
482
+ parser = argparse.ArgumentParser(description='Test prefill speed with CoreML LLaMA models (c) 2025 Anemll')
483
+
484
+ # Add meta.yaml option
485
+ parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')
486
+
487
+ # Model paths
488
+ parser.add_argument('--d', '--dir', type=str, default='.',
489
+ help='Directory containing model files (default: current directory)')
490
+ parser.add_argument('--embed', type=str, required=False,
491
+ help='Path to embeddings model (relative to --dir)')
492
+ parser.add_argument('--ffn', type=str, required=False,
493
+ help='Path to FFN model (can be chunked, relative to --dir)')
494
+ parser.add_argument('--tokenizer', type=str, required=False,
495
+ help='Path to tokenizer')
496
+
497
+ # Test configuration
498
+ parser.add_argument('--batch-size', type=int,
499
+ help='Batch size for prefill test (default: 64)')
500
+ parser.add_argument('--runs', type=int, default=20,
501
+ help='Number of test runs (default: 20)')
502
+ parser.add_argument('--context-length', type=int,
503
+ help='Context length for the model')
504
+ parser.add_argument('--no-single-chunk', action='store_true',
505
+ help='Disable single chunk testing')
506
+ parser.add_argument('--test-tokens', type=int,
507
+ help='Number of tokens to use for the speed test (default: batch_size)')
508
+ parser.add_argument('--test-full-context', action='store_true',
509
+ help='Test prefill speed using the full context length (overrides --test-tokens)')
510
+
511
+ args = parser.parse_args()
512
+
513
+ # If meta.yaml is provided, load parameters from it
514
+ if args.meta:
515
+ try:
516
+ with open(args.meta, 'r') as f:
517
+ meta = yaml.safe_load(f)
518
+ params = meta['model_info']['parameters']
519
+
520
+ # Set model directory to meta.yaml directory if not specified
521
+ if not args.d or args.d == '.':
522
+ args.d = str(Path(args.meta).parent)
523
+
524
+ # Build model paths based on parameters
525
+ prefix = params.get('model_prefix', 'llama')
526
+ lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
527
+ lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
528
+ num_chunks = int(params['num_chunks'])
529
+
530
+ # Set model paths if not specified
531
+ if not args.embed:
532
+ args.embed = f'{prefix}_embeddings{lut_embeddings}'
533
+ if not args.ffn:
534
+ args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
535
+ if not args.tokenizer:
536
+ args.tokenizer = args.d
537
+
538
+ # Set other parameters if not overridden by command line
539
+ if args.context_length is None:
540
+ args.context_length = int(params['context_length'])
541
+ if args.batch_size is None:
542
+ args.batch_size = int(params['batch_size'])
543
+ args.num_chunks = num_chunks
544
+
545
+ print(f"\nLoaded parameters from {args.meta}:")
546
+ print(f" Context Length: {args.context_length}")
547
+ print(f" Batch Size: {args.batch_size}")
548
+ print(f" Num Chunks: {args.num_chunks}")
549
+ print(f" Models Directory: {args.d}")
550
+ print(f" Embeddings: {args.embed}")
551
+ print(f" FFN: {args.ffn}")
552
+
553
+ except Exception as e:
554
+ print(f"\nError loading meta.yaml: {str(e)}")
555
+ sys.exit(1)
556
+
557
+ return args
558
+
559
+ def main():
560
+ args = parse_args()
561
+
562
+ # Use default batch size if not specified
563
+ if args.batch_size is None:
564
+ args.batch_size = 64
565
+ print(f"\nUsing default batch size: {args.batch_size}")
566
+
567
+ # Convert directory to absolute path
568
+ model_dir = Path(args.d).resolve()
569
+ if not model_dir.exists():
570
+ print(f"\nError: Model directory not found: {model_dir}")
571
+ return 1
572
+
573
+ print(f"\nUsing model directory: {model_dir}")
574
+
575
+ try:
576
+ # Update paths to be relative to model directory
577
+ args.embed = str(model_dir / args.embed)
578
+ args.ffn = str(model_dir / args.ffn)
579
+
580
+ # Handle tokenizer path separately
581
+ if args.tokenizer is None:
582
+ args.tokenizer = str(model_dir)
583
+
584
+ if not Path(args.tokenizer).exists():
585
+ print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
586
+ return 1
587
+
588
+ args.tokenizer = str(Path(args.tokenizer).resolve())
589
+ print(f"Using tokenizer path: {args.tokenizer}")
590
+
591
+ # Load models and extract metadata
592
+ metadata = {}
593
+ embed_model, ffn_models, metadata = load_models(args, metadata)
594
+
595
+ # Override context length from command line if provided
596
+ if args.context_length is not None:
597
+ metadata['context_length'] = args.context_length
598
+ metadata['state_length'] = args.context_length
599
+ print(f"\nOverriding context length from command line: {args.context_length}")
600
+
601
+ # Load tokenizer
602
+ tokenizer = initialize_tokenizer(args.tokenizer)
603
+ if tokenizer is None:
604
+ raise RuntimeError("Failed to initialize tokenizer")
605
+
606
+ # Determine number of tokens for the test
607
+ if args.test_full_context:
608
+ num_test_tokens = metadata['context_length']
609
+ print(f"\nTesting with full context length: {num_test_tokens} tokens")
610
+ elif args.test_tokens is not None:
611
+ num_test_tokens = args.test_tokens
612
+ print(f"\nTesting with specified tokens: {num_test_tokens} tokens")
613
+ else:
614
+ num_test_tokens = args.batch_size # Default to batch size
615
+ print(f"\nTesting with default tokens (batch size): {num_test_tokens} tokens")
616
+
617
+ # Ensure test tokens do not exceed context length
618
+ if num_test_tokens > metadata['context_length']:
619
+ print(f"\nWarning: Requested test tokens ({num_test_tokens}) exceed context length ({metadata['context_length']}).")
620
+ print(f"Clamping test tokens to context length.")
621
+ num_test_tokens = metadata['context_length']
622
+
623
+ # Run prefill speed test
624
+ test_prefill_speed(
625
+ embed_model=embed_model,
626
+ ffn_models=ffn_models,
627
+ tokenizer=tokenizer,
628
+ batch_size=args.batch_size, # Pass original batch_size for run_prefill internal logic
629
+ context_length=metadata['context_length'],
630
+ num_test_tokens=num_test_tokens, # Pass the number of tokens to actually test
631
+ num_runs=args.runs,
632
+ test_single_chunk=not args.no_single_chunk
633
+ )
634
+
635
+ except Exception as e:
636
+ print(f"\nError: {str(e)}")
637
+ import traceback
638
+ traceback.print_exc()
639
+ return 1
640
+
641
+ return 0
642
+
643
+ if __name__ == "__main__":
644
+ exit(main())
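The reported speeds follow directly from the timing lists collected in test_prefill_speed: the first half of the runs is treated as warmup and dropped, and tokens per second is computed from the number of test tokens. A small sketch with made-up timings:

```python
# Sketch of the statistics test_prefill_speed reports (timings are illustrative).
num_runs = 6
times = [0.90, 0.52, 0.50, 0.41, 0.40, 0.42]   # seconds per prefill run
num_test_tokens = 512

measured = times[num_runs // 2:]                # discard the warmup half
avg_time = sum(measured) / len(measured)
print(f"avg {avg_time:.2f}s, "
      f"avg speed {num_test_tokens / avg_time:.1f} t/s, "
      f"best speed {num_test_tokens / min(measured):.1f} t/s")
# avg 0.41s, avg speed 1248.8 t/s, best speed 1280.0 t/s
```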
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6b9e4e7fb171f92fd137b777cc2714bf87d11576700a1dcd7a399e7bbe39537b
3
+ size 17209920
tokenizer_config.json ADDED
@@ -0,0 +1,2063 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "128000": {
4
+ "content": "<|begin_of_text|>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "128001": {
12
+ "content": "<|end_of_text|>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "128002": {
20
+ "content": "<|reserved_special_token_0|>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "128003": {
28
+ "content": "<|reserved_special_token_1|>",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "128004": {
36
+ "content": "<|finetune_right_pad_id|>",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ },
43
+ "128005": {
44
+ "content": "<|reserved_special_token_2|>",
45
+ "lstrip": false,
46
+ "normalized": false,
47
+ "rstrip": false,
48
+ "single_word": false,
49
+ "special": true
50
+ },
51
+ "128006": {
52
+ "content": "<|start_header_id|>",
53
+ "lstrip": false,
54
+ "normalized": false,
55
+ "rstrip": false,
56
+ "single_word": false,
57
+ "special": true
58
+ },
59
+ "128007": {
60
+ "content": "<|end_header_id|>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false,
65
+ "special": true
66
+ },
67
+ "128008": {
68
+ "content": "<|eom_id|>",
69
+ "lstrip": false,
70
+ "normalized": false,
71
+ "rstrip": false,
72
+ "single_word": false,
73
+ "special": true
74
+ },
75
+ "128009": {
76
+ "content": "<|eot_id|>",
77
+ "lstrip": false,
78
+ "normalized": false,
79
+ "rstrip": false,
80
+ "single_word": false,
81
+ "special": true
82
+ },
83
+ "128010": {
84
+ "content": "<|python_tag|>",
85
+ "lstrip": false,
86
+ "normalized": false,
87
+ "rstrip": false,
88
+ "single_word": false,
89
+ "special": true
90
+ },
91
+ "128011": {
92
+ "content": "<|reserved_special_token_3|>",
93
+ "lstrip": false,
94
+ "normalized": false,
95
+ "rstrip": false,
96
+ "single_word": false,
97
+ "special": true
98
+ },
99
+ "128012": {
100
+ "content": "<|reserved_special_token_4|>",
101
+ "lstrip": false,
102
+ "normalized": false,
103
+ "rstrip": false,
104
+ "single_word": false,
105
+ "special": true
106
+ },
107
+ "128013": {
108
+ "content": "<|reserved_special_token_5|>",
109
+ "lstrip": false,
110
+ "normalized": false,
111
+ "rstrip": false,
112
+ "single_word": false,
113
+ "special": true
114
+ },
115
+ "128014": {
116
+ "content": "<|reserved_special_token_6|>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false,
121
+ "special": true
122
+ },
123
+ "128015": {
124
+ "content": "<|reserved_special_token_7|>",
125
+ "lstrip": false,
126
+ "normalized": false,
127
+ "rstrip": false,
128
+ "single_word": false,
129
+ "special": true
130
+ },
131
+ "128016": {
132
+ "content": "<|reserved_special_token_8|>",
133
+ "lstrip": false,
134
+ "normalized": false,
135
+ "rstrip": false,
136
+ "single_word": false,
137
+ "special": true
138
+ },
139
+ "128017": {
140
+ "content": "<|reserved_special_token_9|>",
141
+ "lstrip": false,
142
+ "normalized": false,
143
+ "rstrip": false,
144
+ "single_word": false,
145
+ "special": true
146
+ },
147
+ "128018": {
148
+ "content": "<|reserved_special_token_10|>",
149
+ "lstrip": false,
150
+ "normalized": false,
151
+ "rstrip": false,
152
+ "single_word": false,
153
+ "special": true
154
+ },
155
+ "128019": {
156
+ "content": "<|reserved_special_token_11|>",
157
+ "lstrip": false,
158
+ "normalized": false,
159
+ "rstrip": false,
160
+ "single_word": false,
161
+ "special": true
162
+ },
163
+ "128020": {
164
+ "content": "<|reserved_special_token_12|>",
165
+ "lstrip": false,
166
+ "normalized": false,
167
+ "rstrip": false,
168
+ "single_word": false,
169
+ "special": true
170
+ },
171
+ "128021": {
172
+ "content": "<|reserved_special_token_13|>",
173
+ "lstrip": false,
174
+ "normalized": false,
175
+ "rstrip": false,
176
+ "single_word": false,
177
+ "special": true
178
+ },
179
+ "128022": {
180
+ "content": "<|reserved_special_token_14|>",
181
+ "lstrip": false,
182
+ "normalized": false,
183
+ "rstrip": false,
184
+ "single_word": false,
185
+ "special": true
186
+ },
187
+ "128023": {
188
+ "content": "<|reserved_special_token_15|>",
189
+ "lstrip": false,
190
+ "normalized": false,
191
+ "rstrip": false,
192
+ "single_word": false,
193
+ "special": true
194
+ },
195
+ "128024": {
196
+ "content": "<|reserved_special_token_16|>",
197
+ "lstrip": false,
198
+ "normalized": false,
199
+ "rstrip": false,
200
+ "single_word": false,
201
+ "special": true
202
+ },
203
+ "128025": {
204
+ "content": "<|reserved_special_token_17|>",
205
+ "lstrip": false,
206
+ "normalized": false,
207
+ "rstrip": false,
208
+ "single_word": false,
209
+ "special": true
210
+ },
211
+ "128026": {
212
+ "content": "<|reserved_special_token_18|>",
213
+ "lstrip": false,
214
+ "normalized": false,
215
+ "rstrip": false,
216
+ "single_word": false,
217
+ "special": true
218
+ },
219
+ "128027": {
220
+ "content": "<|reserved_special_token_19|>",
221
+ "lstrip": false,
222
+ "normalized": false,
223
+ "rstrip": false,
224
+ "single_word": false,
225
+ "special": true
226
+ },
227
+ "128028": {
228
+ "content": "<|reserved_special_token_20|>",
229
+ "lstrip": false,
230
+ "normalized": false,
231
+ "rstrip": false,
232
+ "single_word": false,
233
+ "special": true
234
+ },
235
+ "128029": {
236
+ "content": "<|reserved_special_token_21|>",
237
+ "lstrip": false,
238
+ "normalized": false,
239
+ "rstrip": false,
240
+ "single_word": false,
241
+ "special": true
242
+ },
243
+ "128030": {
244
+ "content": "<|reserved_special_token_22|>",
245
+ "lstrip": false,
246
+ "normalized": false,
247
+ "rstrip": false,
248
+ "single_word": false,
249
+ "special": true
250
+ },
251
+ "128031": {
252
+ "content": "<|reserved_special_token_23|>",
253
+ "lstrip": false,
254
+ "normalized": false,
255
+ "rstrip": false,
256
+ "single_word": false,
257
+ "special": true
258
+ },
259
+ "128032": {
260
+ "content": "<|reserved_special_token_24|>",
261
+ "lstrip": false,
262
+ "normalized": false,
263
+ "rstrip": false,
264
+ "single_word": false,
265
+ "special": true
266
+ },
267
+ "128033": {
268
+ "content": "<|reserved_special_token_25|>",
269
+ "lstrip": false,
270
+ "normalized": false,
271
+ "rstrip": false,
272
+ "single_word": false,
273
+ "special": true
274
+ },
275
+ "128034": {
276
+ "content": "<|reserved_special_token_26|>",
277
+ "lstrip": false,
278
+ "normalized": false,
279
+ "rstrip": false,
280
+ "single_word": false,
281
+ "special": true
282
+ },
283
+ "128035": {
284
+ "content": "<|reserved_special_token_27|>",
285
+ "lstrip": false,
286
+ "normalized": false,
287
+ "rstrip": false,
288
+ "single_word": false,
289
+ "special": true
290
+ },
291
+ "128036": {
292
+ "content": "<|reserved_special_token_28|>",
293
+ "lstrip": false,
294
+ "normalized": false,
295
+ "rstrip": false,
296
+ "single_word": false,
297
+ "special": true
298
+ },
299
+ "128037": {
300
+ "content": "<|reserved_special_token_29|>",
301
+ "lstrip": false,
302
+ "normalized": false,
303
+ "rstrip": false,
304
+ "single_word": false,
305
+ "special": true
306
+ },
307
+ "128038": {
308
+ "content": "<|reserved_special_token_30|>",
309
+ "lstrip": false,
310
+ "normalized": false,
311
+ "rstrip": false,
312
+ "single_word": false,
313
+ "special": true
314
+ },
315
+ "128039": {
316
+ "content": "<|reserved_special_token_31|>",
317
+ "lstrip": false,
318
+ "normalized": false,
319
+ "rstrip": false,
320
+ "single_word": false,
321
+ "special": true
322
+ },
323
+ "128040": {
324
+ "content": "<|reserved_special_token_32|>",
325
+ "lstrip": false,
326
+ "normalized": false,
327
+ "rstrip": false,
328
+ "single_word": false,
329
+ "special": true
330
+ },
331
+ "128041": {
332
+ "content": "<|reserved_special_token_33|>",
333
+ "lstrip": false,
334
+ "normalized": false,
335
+ "rstrip": false,
336
+ "single_word": false,
337
+ "special": true
338
+ },
339
+ "128042": {
340
+ "content": "<|reserved_special_token_34|>",
341
+ "lstrip": false,
342
+ "normalized": false,
343
+ "rstrip": false,
344
+ "single_word": false,
345
+ "special": true
346
+ },
347
+ "128043": {
348
+ "content": "<|reserved_special_token_35|>",
349
+ "lstrip": false,
350
+ "normalized": false,
351
+ "rstrip": false,
352
+ "single_word": false,
353
+ "special": true
354
+ },
355
+ "128044": {
356
+ "content": "<|reserved_special_token_36|>",
357
+ "lstrip": false,
358
+ "normalized": false,
359
+ "rstrip": false,
360
+ "single_word": false,
361
+ "special": true
362
+ },
363
+ "128045": {
364
+ "content": "<|reserved_special_token_37|>",
365
+ "lstrip": false,
366
+ "normalized": false,
367
+ "rstrip": false,
368
+ "single_word": false,
369
+ "special": true
370
+ },
371
+ "128046": {
372
+ "content": "<|reserved_special_token_38|>",
373
+ "lstrip": false,
374
+ "normalized": false,
375
+ "rstrip": false,
376
+ "single_word": false,
377
+ "special": true
378
+ },
379
+ "128047": {
380
+ "content": "<|reserved_special_token_39|>",
381
+ "lstrip": false,
382
+ "normalized": false,
383
+ "rstrip": false,
384
+ "single_word": false,
385
+ "special": true
386
+ },
387
+ "128048": {
388
+ "content": "<|reserved_special_token_40|>",
389
+ "lstrip": false,
390
+ "normalized": false,
391
+ "rstrip": false,
392
+ "single_word": false,
393
+ "special": true
394
+ },
395
+ "128049": {
396
+ "content": "<|reserved_special_token_41|>",
397
+ "lstrip": false,
398
+ "normalized": false,
399
+ "rstrip": false,
400
+ "single_word": false,
401
+ "special": true
402
+ },
403
+ "128050": {
404
+ "content": "<|reserved_special_token_42|>",
405
+ "lstrip": false,
406
+ "normalized": false,
407
+ "rstrip": false,
408
+ "single_word": false,
409
+ "special": true
410
+ },
411
+ "128051": {
412
+ "content": "<|reserved_special_token_43|>",
413
+ "lstrip": false,
414
+ "normalized": false,
415
+ "rstrip": false,
416
+ "single_word": false,
417
+ "special": true
418
+ },
419
+ "128052": {
420
+ "content": "<|reserved_special_token_44|>",
421
+ "lstrip": false,
422
+ "normalized": false,
423
+ "rstrip": false,
424
+ "single_word": false,
425
+ "special": true
426
+ },
427
+ "128053": {
428
+ "content": "<|reserved_special_token_45|>",
429
+ "lstrip": false,
430
+ "normalized": false,
431
+ "rstrip": false,
432
+ "single_word": false,
433
+ "special": true
434
+ },
435
+ "128054": {
436
+ "content": "<|reserved_special_token_46|>",
437
+ "lstrip": false,
438
+ "normalized": false,
439
+ "rstrip": false,
440
+ "single_word": false,
441
+ "special": true
442
+ },
443
+ "128055": {
444
+ "content": "<|reserved_special_token_47|>",
445
+ "lstrip": false,
446
+ "normalized": false,
447
+ "rstrip": false,
448
+ "single_word": false,
449
+ "special": true
450
+ },
451
+ "128056": {
452
+ "content": "<|reserved_special_token_48|>",
453
+ "lstrip": false,
454
+ "normalized": false,
455
+ "rstrip": false,
456
+ "single_word": false,
457
+ "special": true
458
+ },
459
+ "128057": {
460
+ "content": "<|reserved_special_token_49|>",
461
+ "lstrip": false,
462
+ "normalized": false,
463
+ "rstrip": false,
464
+ "single_word": false,
465
+ "special": true
466
+ },
467
+ "128058": {
468
+ "content": "<|reserved_special_token_50|>",
469
+ "lstrip": false,
470
+ "normalized": false,
471
+ "rstrip": false,
472
+ "single_word": false,
473
+ "special": true
474
+ },
475
+ "128059": {
476
+ "content": "<|reserved_special_token_51|>",
477
+ "lstrip": false,
478
+ "normalized": false,
479
+ "rstrip": false,
480
+ "single_word": false,
481
+ "special": true
482
+ },
483
+ "128060": {
484
+ "content": "<|reserved_special_token_52|>",
485
+ "lstrip": false,
486
+ "normalized": false,
487
+ "rstrip": false,
488
+ "single_word": false,
489
+ "special": true
490
+ },
491
+ "128061": {
492
+ "content": "<|reserved_special_token_53|>",
493
+ "lstrip": false,
494
+ "normalized": false,
495
+ "rstrip": false,
496
+ "single_word": false,
497
+ "special": true
498
+ },
499
+ "128062": {
500
+ "content": "<|reserved_special_token_54|>",
501
+ "lstrip": false,
502
+ "normalized": false,
503
+ "rstrip": false,
504
+ "single_word": false,
505
+ "special": true
506
+ },
507
+ "128063": {
508
+ "content": "<|reserved_special_token_55|>",
509
+ "lstrip": false,
510
+ "normalized": false,
511
+ "rstrip": false,
512
+ "single_word": false,
513
+ "special": true
514
+ },
515
+ "128064": {
516
+ "content": "<|reserved_special_token_56|>",
517
+ "lstrip": false,
518
+ "normalized": false,
519
+ "rstrip": false,
520
+ "single_word": false,
521
+ "special": true
522
+ },
523
+ "128065": {
524
+ "content": "<|reserved_special_token_57|>",
525
+ "lstrip": false,
526
+ "normalized": false,
527
+ "rstrip": false,
528
+ "single_word": false,
529
+ "special": true
530
+ },
531
+ "128066": {
532
+ "content": "<|reserved_special_token_58|>",
533
+ "lstrip": false,
534
+ "normalized": false,
535
+ "rstrip": false,
536
+ "single_word": false,
537
+ "special": true
538
+ },
539
+ "128067": {
540
+ "content": "<|reserved_special_token_59|>",
541
+ "lstrip": false,
542
+ "normalized": false,
543
+ "rstrip": false,
544
+ "single_word": false,
545
+ "special": true
546
+ },
547
+ "128068": {
548
+ "content": "<|reserved_special_token_60|>",
549
+ "lstrip": false,
550
+ "normalized": false,
551
+ "rstrip": false,
552
+ "single_word": false,
553
+ "special": true
554
+ },
555
+ "128069": {
556
+ "content": "<|reserved_special_token_61|>",
557
+ "lstrip": false,
558
+ "normalized": false,
559
+ "rstrip": false,
560
+ "single_word": false,
561
+ "special": true
562
+ },
563
+ "128070": {
564
+ "content": "<|reserved_special_token_62|>",
565
+ "lstrip": false,
566
+ "normalized": false,
567
+ "rstrip": false,
568
+ "single_word": false,
569
+ "special": true
570
+ },
571
+ "128071": {
572
+ "content": "<|reserved_special_token_63|>",
573
+ "lstrip": false,
574
+ "normalized": false,
575
+ "rstrip": false,
576
+ "single_word": false,
577
+ "special": true
578
+ },
579
+ "128072": {
580
+ "content": "<|reserved_special_token_64|>",
581
+ "lstrip": false,
582
+ "normalized": false,
583
+ "rstrip": false,
584
+ "single_word": false,
585
+ "special": true
586
+ },
587
+ "128073": {
588
+ "content": "<|reserved_special_token_65|>",
589
+ "lstrip": false,
590
+ "normalized": false,
591
+ "rstrip": false,
592
+ "single_word": false,
593
+ "special": true
594
+ },
595
+ "128074": {
596
+ "content": "<|reserved_special_token_66|>",
597
+ "lstrip": false,
598
+ "normalized": false,
599
+ "rstrip": false,
600
+ "single_word": false,
601
+ "special": true
602
+ },
603
+ "128075": {
604
+ "content": "<|reserved_special_token_67|>",
605
+ "lstrip": false,
606
+ "normalized": false,
607
+ "rstrip": false,
608
+ "single_word": false,
609
+ "special": true
610
+ },
611
+ "128076": {
612
+ "content": "<|reserved_special_token_68|>",
613
+ "lstrip": false,
614
+ "normalized": false,
615
+ "rstrip": false,
616
+ "single_word": false,
617
+ "special": true
618
+ },
619
+ "128077": {
620
+ "content": "<|reserved_special_token_69|>",
621
+ "lstrip": false,
622
+ "normalized": false,
623
+ "rstrip": false,
624
+ "single_word": false,
625
+ "special": true
626
+ },
627
+ "128078": {
628
+ "content": "<|reserved_special_token_70|>",
629
+ "lstrip": false,
630
+ "normalized": false,
631
+ "rstrip": false,
632
+ "single_word": false,
633
+ "special": true
634
+ },
635
+ "128079": {
636
+ "content": "<|reserved_special_token_71|>",
637
+ "lstrip": false,
638
+ "normalized": false,
639
+ "rstrip": false,
640
+ "single_word": false,
641
+ "special": true
642
+ },
643
+ "128080": {
644
+ "content": "<|reserved_special_token_72|>",
645
+ "lstrip": false,
646
+ "normalized": false,
647
+ "rstrip": false,
648
+ "single_word": false,
649
+ "special": true
650
+ },
651
+ "128081": {
652
+ "content": "<|reserved_special_token_73|>",
653
+ "lstrip": false,
654
+ "normalized": false,
655
+ "rstrip": false,
656
+ "single_word": false,
657
+ "special": true
658
+ },
659
+ "128082": {
660
+ "content": "<|reserved_special_token_74|>",
661
+ "lstrip": false,
662
+ "normalized": false,
663
+ "rstrip": false,
664
+ "single_word": false,
665
+ "special": true
666
+ },
667
+ "128083": {
668
+ "content": "<|reserved_special_token_75|>",
669
+ "lstrip": false,
670
+ "normalized": false,
671
+ "rstrip": false,
672
+ "single_word": false,
673
+ "special": true
674
+ },
675
+ "128084": {
676
+ "content": "<|reserved_special_token_76|>",
677
+ "lstrip": false,
678
+ "normalized": false,
679
+ "rstrip": false,
680
+ "single_word": false,
681
+ "special": true
682
+ },
683
+ "128085": {
684
+ "content": "<|reserved_special_token_77|>",
685
+ "lstrip": false,
686
+ "normalized": false,
687
+ "rstrip": false,
688
+ "single_word": false,
689
+ "special": true
690
+ },
691
+ "128086": {
692
+ "content": "<|reserved_special_token_78|>",
693
+ "lstrip": false,
694
+ "normalized": false,
695
+ "rstrip": false,
696
+ "single_word": false,
697
+ "special": true
698
+ },
699
+ "128087": {
700
+ "content": "<|reserved_special_token_79|>",
701
+ "lstrip": false,
702
+ "normalized": false,
703
+ "rstrip": false,
704
+ "single_word": false,
705
+ "special": true
706
+ },
707
+ "128088": {
708
+ "content": "<|reserved_special_token_80|>",
709
+ "lstrip": false,
710
+ "normalized": false,
711
+ "rstrip": false,
712
+ "single_word": false,
713
+ "special": true
714
+ },
715
+ "128089": {
716
+ "content": "<|reserved_special_token_81|>",
717
+ "lstrip": false,
718
+ "normalized": false,
719
+ "rstrip": false,
720
+ "single_word": false,
721
+ "special": true
722
+ },
723
+ "128090": {
724
+ "content": "<|reserved_special_token_82|>",
725
+ "lstrip": false,
726
+ "normalized": false,
727
+ "rstrip": false,
728
+ "single_word": false,
729
+ "special": true
730
+ },
731
+ "128091": {
732
+ "content": "<|reserved_special_token_83|>",
733
+ "lstrip": false,
734
+ "normalized": false,
735
+ "rstrip": false,
736
+ "single_word": false,
737
+ "special": true
738
+ },
739
+ "128092": {
740
+ "content": "<|reserved_special_token_84|>",
741
+ "lstrip": false,
742
+ "normalized": false,
743
+ "rstrip": false,
744
+ "single_word": false,
745
+ "special": true
746
+ },
747
+ "128093": {
748
+ "content": "<|reserved_special_token_85|>",
749
+ "lstrip": false,
750
+ "normalized": false,
751
+ "rstrip": false,
752
+ "single_word": false,
753
+ "special": true
754
+ },
755
+ "128094": {
756
+ "content": "<|reserved_special_token_86|>",
757
+ "lstrip": false,
758
+ "normalized": false,
759
+ "rstrip": false,
760
+ "single_word": false,
761
+ "special": true
762
+ },
763
+ "128095": {
764
+ "content": "<|reserved_special_token_87|>",
765
+ "lstrip": false,
766
+ "normalized": false,
767
+ "rstrip": false,
768
+ "single_word": false,
769
+ "special": true
770
+ },
771
+ "128096": {
772
+ "content": "<|reserved_special_token_88|>",
773
+ "lstrip": false,
774
+ "normalized": false,
775
+ "rstrip": false,
776
+ "single_word": false,
777
+ "special": true
778
+ },
779
+ "128097": {
780
+ "content": "<|reserved_special_token_89|>",
781
+ "lstrip": false,
782
+ "normalized": false,
783
+ "rstrip": false,
784
+ "single_word": false,
785
+ "special": true
786
+ },
787
+ "128098": {
788
+ "content": "<|reserved_special_token_90|>",
789
+ "lstrip": false,
790
+ "normalized": false,
791
+ "rstrip": false,
792
+ "single_word": false,
793
+ "special": true
794
+ },
795
+ "128099": {
796
+ "content": "<|reserved_special_token_91|>",
797
+ "lstrip": false,
798
+ "normalized": false,
799
+ "rstrip": false,
800
+ "single_word": false,
801
+ "special": true
802
+ },
803
+ "128100": {
804
+ "content": "<|reserved_special_token_92|>",
805
+ "lstrip": false,
806
+ "normalized": false,
807
+ "rstrip": false,
808
+ "single_word": false,
809
+ "special": true
810
+ },
811
+ "128101": {
812
+ "content": "<|reserved_special_token_93|>",
813
+ "lstrip": false,
814
+ "normalized": false,
815
+ "rstrip": false,
816
+ "single_word": false,
817
+ "special": true
818
+ },
819
+ "128102": {
820
+ "content": "<|reserved_special_token_94|>",
821
+ "lstrip": false,
822
+ "normalized": false,
823
+ "rstrip": false,
824
+ "single_word": false,
825
+ "special": true
826
+ },
827
+ "128103": {
828
+ "content": "<|reserved_special_token_95|>",
829
+ "lstrip": false,
830
+ "normalized": false,
831
+ "rstrip": false,
832
+ "single_word": false,
833
+ "special": true
834
+ },
835
+ "128104": {
836
+ "content": "<|reserved_special_token_96|>",
837
+ "lstrip": false,
838
+ "normalized": false,
839
+ "rstrip": false,
840
+ "single_word": false,
841
+ "special": true
842
+ },
843
+ "128105": {
844
+ "content": "<|reserved_special_token_97|>",
845
+ "lstrip": false,
846
+ "normalized": false,
847
+ "rstrip": false,
848
+ "single_word": false,
849
+ "special": true
850
+ },
851
+ "128106": {
852
+ "content": "<|reserved_special_token_98|>",
853
+ "lstrip": false,
854
+ "normalized": false,
855
+ "rstrip": false,
856
+ "single_word": false,
857
+ "special": true
858
+ },
859
+ "128107": {
860
+ "content": "<|reserved_special_token_99|>",
861
+ "lstrip": false,
862
+ "normalized": false,
863
+ "rstrip": false,
864
+ "single_word": false,
865
+ "special": true
866
+ },
867
+ "128108": {
868
+ "content": "<|reserved_special_token_100|>",
869
+ "lstrip": false,
870
+ "normalized": false,
871
+ "rstrip": false,
872
+ "single_word": false,
873
+ "special": true
874
+ },
875
+ "128109": {
876
+ "content": "<|reserved_special_token_101|>",
877
+ "lstrip": false,
878
+ "normalized": false,
879
+ "rstrip": false,
880
+ "single_word": false,
881
+ "special": true
882
+ },
883
+ "128110": {
884
+ "content": "<|reserved_special_token_102|>",
885
+ "lstrip": false,
886
+ "normalized": false,
887
+ "rstrip": false,
888
+ "single_word": false,
889
+ "special": true
890
+ },
891
+ "128111": {
892
+ "content": "<|reserved_special_token_103|>",
893
+ "lstrip": false,
894
+ "normalized": false,
895
+ "rstrip": false,
896
+ "single_word": false,
897
+ "special": true
898
+ },
899
+ "128112": {
900
+ "content": "<|reserved_special_token_104|>",
901
+ "lstrip": false,
902
+ "normalized": false,
903
+ "rstrip": false,
904
+ "single_word": false,
905
+ "special": true
906
+ },
907
+ "128113": {
908
+ "content": "<|reserved_special_token_105|>",
909
+ "lstrip": false,
910
+ "normalized": false,
911
+ "rstrip": false,
912
+ "single_word": false,
913
+ "special": true
914
+ },
915
+ "128114": {
916
+ "content": "<|reserved_special_token_106|>",
917
+ "lstrip": false,
918
+ "normalized": false,
919
+ "rstrip": false,
920
+ "single_word": false,
921
+ "special": true
922
+ },
923
+ "128115": {
924
+ "content": "<|reserved_special_token_107|>",
925
+ "lstrip": false,
926
+ "normalized": false,
927
+ "rstrip": false,
928
+ "single_word": false,
929
+ "special": true
930
+ },
931
+ "128116": {
932
+ "content": "<|reserved_special_token_108|>",
933
+ "lstrip": false,
934
+ "normalized": false,
935
+ "rstrip": false,
936
+ "single_word": false,
937
+ "special": true
938
+ },
939
+ "128117": {
940
+ "content": "<|reserved_special_token_109|>",
941
+ "lstrip": false,
942
+ "normalized": false,
943
+ "rstrip": false,
944
+ "single_word": false,
945
+ "special": true
946
+ },
947
+ "128118": {
948
+ "content": "<|reserved_special_token_110|>",
949
+ "lstrip": false,
950
+ "normalized": false,
951
+ "rstrip": false,
952
+ "single_word": false,
953
+ "special": true
954
+ },
955
+ "128119": {
956
+ "content": "<|reserved_special_token_111|>",
957
+ "lstrip": false,
958
+ "normalized": false,
959
+ "rstrip": false,
960
+ "single_word": false,
961
+ "special": true
962
+ },
963
+ "128120": {
964
+ "content": "<|reserved_special_token_112|>",
965
+ "lstrip": false,
966
+ "normalized": false,
967
+ "rstrip": false,
968
+ "single_word": false,
969
+ "special": true
970
+ },
971
+ "128121": {
972
+ "content": "<|reserved_special_token_113|>",
973
+ "lstrip": false,
974
+ "normalized": false,
975
+ "rstrip": false,
976
+ "single_word": false,
977
+ "special": true
978
+ },
979
+ "128122": {
980
+ "content": "<|reserved_special_token_114|>",
981
+ "lstrip": false,
982
+ "normalized": false,
983
+ "rstrip": false,
984
+ "single_word": false,
985
+ "special": true
986
+ },
987
+ "128123": {
988
+ "content": "<|reserved_special_token_115|>",
989
+ "lstrip": false,
990
+ "normalized": false,
991
+ "rstrip": false,
992
+ "single_word": false,
993
+ "special": true
994
+ },
995
+ "128124": {
996
+ "content": "<|reserved_special_token_116|>",
997
+ "lstrip": false,
998
+ "normalized": false,
999
+ "rstrip": false,
1000
+ "single_word": false,
1001
+ "special": true
1002
+ },
1003
+ "128125": {
1004
+ "content": "<|reserved_special_token_117|>",
1005
+ "lstrip": false,
1006
+ "normalized": false,
1007
+ "rstrip": false,
1008
+ "single_word": false,
1009
+ "special": true
1010
+ },
1011
+ "128126": {
1012
+ "content": "<|reserved_special_token_118|>",
1013
+ "lstrip": false,
1014
+ "normalized": false,
1015
+ "rstrip": false,
1016
+ "single_word": false,
1017
+ "special": true
1018
+ },
1019
+ "128127": {
1020
+ "content": "<|reserved_special_token_119|>",
1021
+ "lstrip": false,
1022
+ "normalized": false,
1023
+ "rstrip": false,
1024
+ "single_word": false,
1025
+ "special": true
1026
+ },
1027
+ "128128": {
1028
+ "content": "<|reserved_special_token_120|>",
1029
+ "lstrip": false,
1030
+ "normalized": false,
1031
+ "rstrip": false,
1032
+ "single_word": false,
1033
+ "special": true
1034
+ },
1035
+ "128129": {
1036
+ "content": "<|reserved_special_token_121|>",
1037
+ "lstrip": false,
1038
+ "normalized": false,
1039
+ "rstrip": false,
1040
+ "single_word": false,
1041
+ "special": true
1042
+ },
1043
+ "128130": {
1044
+ "content": "<|reserved_special_token_122|>",
1045
+ "lstrip": false,
1046
+ "normalized": false,
1047
+ "rstrip": false,
1048
+ "single_word": false,
1049
+ "special": true
1050
+ },
1051
+ "128131": {
1052
+ "content": "<|reserved_special_token_123|>",
1053
+ "lstrip": false,
1054
+ "normalized": false,
1055
+ "rstrip": false,
1056
+ "single_word": false,
1057
+ "special": true
1058
+ },
1059
+ "128132": {
1060
+ "content": "<|reserved_special_token_124|>",
1061
+ "lstrip": false,
1062
+ "normalized": false,
1063
+ "rstrip": false,
1064
+ "single_word": false,
1065
+ "special": true
1066
+ },
1067
+ "128133": {
1068
+ "content": "<|reserved_special_token_125|>",
1069
+ "lstrip": false,
1070
+ "normalized": false,
1071
+ "rstrip": false,
1072
+ "single_word": false,
1073
+ "special": true
1074
+ },
1075
+ "128134": {
1076
+ "content": "<|reserved_special_token_126|>",
1077
+ "lstrip": false,
1078
+ "normalized": false,
1079
+ "rstrip": false,
1080
+ "single_word": false,
1081
+ "special": true
1082
+ },
1083
+ "128135": {
1084
+ "content": "<|reserved_special_token_127|>",
1085
+ "lstrip": false,
1086
+ "normalized": false,
1087
+ "rstrip": false,
1088
+ "single_word": false,
1089
+ "special": true
1090
+ },
1091
+ "128136": {
1092
+ "content": "<|reserved_special_token_128|>",
1093
+ "lstrip": false,
1094
+ "normalized": false,
1095
+ "rstrip": false,
1096
+ "single_word": false,
1097
+ "special": true
1098
+ },
1099
+ "128137": {
1100
+ "content": "<|reserved_special_token_129|>",
1101
+ "lstrip": false,
1102
+ "normalized": false,
1103
+ "rstrip": false,
1104
+ "single_word": false,
1105
+ "special": true
1106
+ },
1107
+ "128138": {
1108
+ "content": "<|reserved_special_token_130|>",
1109
+ "lstrip": false,
1110
+ "normalized": false,
1111
+ "rstrip": false,
1112
+ "single_word": false,
1113
+ "special": true
1114
+ },
1115
+ "128139": {
1116
+ "content": "<|reserved_special_token_131|>",
1117
+ "lstrip": false,
1118
+ "normalized": false,
1119
+ "rstrip": false,
1120
+ "single_word": false,
1121
+ "special": true
1122
+ },
1123
+ "128140": {
1124
+ "content": "<|reserved_special_token_132|>",
1125
+ "lstrip": false,
1126
+ "normalized": false,
1127
+ "rstrip": false,
1128
+ "single_word": false,
1129
+ "special": true
1130
+ },
1131
+ "128141": {
1132
+ "content": "<|reserved_special_token_133|>",
1133
+ "lstrip": false,
1134
+ "normalized": false,
1135
+ "rstrip": false,
1136
+ "single_word": false,
1137
+ "special": true
1138
+ },
1139
+ "128142": {
1140
+ "content": "<|reserved_special_token_134|>",
1141
+ "lstrip": false,
1142
+ "normalized": false,
1143
+ "rstrip": false,
1144
+ "single_word": false,
1145
+ "special": true
1146
+ },
1147
+ "128143": {
1148
+ "content": "<|reserved_special_token_135|>",
1149
+ "lstrip": false,
1150
+ "normalized": false,
1151
+ "rstrip": false,
1152
+ "single_word": false,
1153
+ "special": true
1154
+ },
1155
+ "128144": {
1156
+ "content": "<|reserved_special_token_136|>",
1157
+ "lstrip": false,
1158
+ "normalized": false,
1159
+ "rstrip": false,
1160
+ "single_word": false,
1161
+ "special": true
1162
+ },
1163
+ "128145": {
1164
+ "content": "<|reserved_special_token_137|>",
1165
+ "lstrip": false,
1166
+ "normalized": false,
1167
+ "rstrip": false,
1168
+ "single_word": false,
1169
+ "special": true
1170
+ },
1171
+ "128146": {
1172
+ "content": "<|reserved_special_token_138|>",
1173
+ "lstrip": false,
1174
+ "normalized": false,
1175
+ "rstrip": false,
1176
+ "single_word": false,
1177
+ "special": true
1178
+ },
1179
+ "128147": {
1180
+ "content": "<|reserved_special_token_139|>",
1181
+ "lstrip": false,
1182
+ "normalized": false,
1183
+ "rstrip": false,
1184
+ "single_word": false,
1185
+ "special": true
1186
+ },
1187
+ "128148": {
1188
+ "content": "<|reserved_special_token_140|>",
1189
+ "lstrip": false,
1190
+ "normalized": false,
1191
+ "rstrip": false,
1192
+ "single_word": false,
1193
+ "special": true
1194
+ },
1195
+ "128149": {
1196
+ "content": "<|reserved_special_token_141|>",
1197
+ "lstrip": false,
1198
+ "normalized": false,
1199
+ "rstrip": false,
1200
+ "single_word": false,
1201
+ "special": true
1202
+ },
1203
+ "128150": {
1204
+ "content": "<|reserved_special_token_142|>",
1205
+ "lstrip": false,
1206
+ "normalized": false,
1207
+ "rstrip": false,
1208
+ "single_word": false,
1209
+ "special": true
1210
+ },
1211
+ "128151": {
1212
+ "content": "<|reserved_special_token_143|>",
1213
+ "lstrip": false,
1214
+ "normalized": false,
1215
+ "rstrip": false,
1216
+ "single_word": false,
1217
+ "special": true
1218
+ },
1219
+ "128152": {
1220
+ "content": "<|reserved_special_token_144|>",
1221
+ "lstrip": false,
1222
+ "normalized": false,
1223
+ "rstrip": false,
1224
+ "single_word": false,
1225
+ "special": true
1226
+ },
1227
+ "128153": {
1228
+ "content": "<|reserved_special_token_145|>",
1229
+ "lstrip": false,
1230
+ "normalized": false,
1231
+ "rstrip": false,
1232
+ "single_word": false,
1233
+ "special": true
1234
+ },
1235
+ "128154": {
1236
+ "content": "<|reserved_special_token_146|>",
1237
+ "lstrip": false,
1238
+ "normalized": false,
1239
+ "rstrip": false,
1240
+ "single_word": false,
1241
+ "special": true
1242
+ },
1243
+ "128155": {
1244
+ "content": "<|reserved_special_token_147|>",
1245
+ "lstrip": false,
1246
+ "normalized": false,
1247
+ "rstrip": false,
1248
+ "single_word": false,
1249
+ "special": true
1250
+ },
1251
+ "128156": {
1252
+ "content": "<|reserved_special_token_148|>",
1253
+ "lstrip": false,
1254
+ "normalized": false,
1255
+ "rstrip": false,
1256
+ "single_word": false,
1257
+ "special": true
1258
+ },
1259
+ "128157": {
1260
+ "content": "<|reserved_special_token_149|>",
1261
+ "lstrip": false,
1262
+ "normalized": false,
1263
+ "rstrip": false,
1264
+ "single_word": false,
1265
+ "special": true
1266
+ },
1267
+ "128158": {
1268
+ "content": "<|reserved_special_token_150|>",
1269
+ "lstrip": false,
1270
+ "normalized": false,
1271
+ "rstrip": false,
1272
+ "single_word": false,
1273
+ "special": true
1274
+ },
1275
+ "128159": {
1276
+ "content": "<|reserved_special_token_151|>",
1277
+ "lstrip": false,
1278
+ "normalized": false,
1279
+ "rstrip": false,
1280
+ "single_word": false,
1281
+ "special": true
1282
+ },
1283
+ "128160": {
1284
+ "content": "<|reserved_special_token_152|>",
1285
+ "lstrip": false,
1286
+ "normalized": false,
1287
+ "rstrip": false,
1288
+ "single_word": false,
1289
+ "special": true
1290
+ },
1291
+ "128161": {
1292
+ "content": "<|reserved_special_token_153|>",
1293
+ "lstrip": false,
1294
+ "normalized": false,
1295
+ "rstrip": false,
1296
+ "single_word": false,
1297
+ "special": true
1298
+ },
1299
+ "128162": {
1300
+ "content": "<|reserved_special_token_154|>",
1301
+ "lstrip": false,
1302
+ "normalized": false,
1303
+ "rstrip": false,
1304
+ "single_word": false,
1305
+ "special": true
1306
+ },
1307
+ "128163": {
1308
+ "content": "<|reserved_special_token_155|>",
1309
+ "lstrip": false,
1310
+ "normalized": false,
1311
+ "rstrip": false,
1312
+ "single_word": false,
1313
+ "special": true
1314
+ },
1315
+ "128164": {
1316
+ "content": "<|reserved_special_token_156|>",
1317
+ "lstrip": false,
1318
+ "normalized": false,
1319
+ "rstrip": false,
1320
+ "single_word": false,
1321
+ "special": true
1322
+ },
1323
+ "128165": {
1324
+ "content": "<|reserved_special_token_157|>",
1325
+ "lstrip": false,
1326
+ "normalized": false,
1327
+ "rstrip": false,
1328
+ "single_word": false,
1329
+ "special": true
1330
+ },
1331
+ "128166": {
1332
+ "content": "<|reserved_special_token_158|>",
1333
+ "lstrip": false,
1334
+ "normalized": false,
1335
+ "rstrip": false,
1336
+ "single_word": false,
1337
+ "special": true
1338
+ },
1339
+ "128167": {
1340
+ "content": "<|reserved_special_token_159|>",
1341
+ "lstrip": false,
1342
+ "normalized": false,
1343
+ "rstrip": false,
1344
+ "single_word": false,
1345
+ "special": true
1346
+ },
1347
+ "128168": {
1348
+ "content": "<|reserved_special_token_160|>",
1349
+ "lstrip": false,
1350
+ "normalized": false,
1351
+ "rstrip": false,
1352
+ "single_word": false,
1353
+ "special": true
1354
+ },
1355
+ "128169": {
1356
+ "content": "<|reserved_special_token_161|>",
1357
+ "lstrip": false,
1358
+ "normalized": false,
1359
+ "rstrip": false,
1360
+ "single_word": false,
1361
+ "special": true
1362
+ },
1363
+ "128170": {
1364
+ "content": "<|reserved_special_token_162|>",
1365
+ "lstrip": false,
1366
+ "normalized": false,
1367
+ "rstrip": false,
1368
+ "single_word": false,
1369
+ "special": true
1370
+ },
1371
+ "128171": {
1372
+ "content": "<|reserved_special_token_163|>",
1373
+ "lstrip": false,
1374
+ "normalized": false,
1375
+ "rstrip": false,
1376
+ "single_word": false,
1377
+ "special": true
1378
+ },
1379
+ "128172": {
1380
+ "content": "<|reserved_special_token_164|>",
1381
+ "lstrip": false,
1382
+ "normalized": false,
1383
+ "rstrip": false,
1384
+ "single_word": false,
1385
+ "special": true
1386
+ },
1387
+ "128173": {
1388
+ "content": "<|reserved_special_token_165|>",
1389
+ "lstrip": false,
1390
+ "normalized": false,
1391
+ "rstrip": false,
1392
+ "single_word": false,
1393
+ "special": true
1394
+ },
1395
+ "128174": {
1396
+ "content": "<|reserved_special_token_166|>",
1397
+ "lstrip": false,
1398
+ "normalized": false,
1399
+ "rstrip": false,
1400
+ "single_word": false,
1401
+ "special": true
1402
+ },
1403
+ "128175": {
1404
+ "content": "<|reserved_special_token_167|>",
1405
+ "lstrip": false,
1406
+ "normalized": false,
1407
+ "rstrip": false,
1408
+ "single_word": false,
1409
+ "special": true
1410
+ },
1411
+ "128176": {
1412
+ "content": "<|reserved_special_token_168|>",
1413
+ "lstrip": false,
1414
+ "normalized": false,
1415
+ "rstrip": false,
1416
+ "single_word": false,
1417
+ "special": true
1418
+ },
1419
+ "128177": {
1420
+ "content": "<|reserved_special_token_169|>",
1421
+ "lstrip": false,
1422
+ "normalized": false,
1423
+ "rstrip": false,
1424
+ "single_word": false,
1425
+ "special": true
1426
+ },
1427
+ "128178": {
1428
+ "content": "<|reserved_special_token_170|>",
1429
+ "lstrip": false,
1430
+ "normalized": false,
1431
+ "rstrip": false,
1432
+ "single_word": false,
1433
+ "special": true
1434
+ },
1435
+ "128179": {
1436
+ "content": "<|reserved_special_token_171|>",
1437
+ "lstrip": false,
1438
+ "normalized": false,
1439
+ "rstrip": false,
1440
+ "single_word": false,
1441
+ "special": true
1442
+ },
1443
+ "128180": {
1444
+ "content": "<|reserved_special_token_172|>",
1445
+ "lstrip": false,
1446
+ "normalized": false,
1447
+ "rstrip": false,
1448
+ "single_word": false,
1449
+ "special": true
1450
+ },
1451
+ "128181": {
1452
+ "content": "<|reserved_special_token_173|>",
1453
+ "lstrip": false,
1454
+ "normalized": false,
1455
+ "rstrip": false,
1456
+ "single_word": false,
1457
+ "special": true
1458
+ },
1459
+ "128182": {
1460
+ "content": "<|reserved_special_token_174|>",
1461
+ "lstrip": false,
1462
+ "normalized": false,
1463
+ "rstrip": false,
1464
+ "single_word": false,
1465
+ "special": true
1466
+ },
1467
+ "128183": {
1468
+ "content": "<|reserved_special_token_175|>",
1469
+ "lstrip": false,
1470
+ "normalized": false,
1471
+ "rstrip": false,
1472
+ "single_word": false,
1473
+ "special": true
1474
+ },
1475
+ "128184": {
1476
+ "content": "<|reserved_special_token_176|>",
1477
+ "lstrip": false,
1478
+ "normalized": false,
1479
+ "rstrip": false,
1480
+ "single_word": false,
1481
+ "special": true
1482
+ },
1483
+ "128185": {
1484
+ "content": "<|reserved_special_token_177|>",
1485
+ "lstrip": false,
1486
+ "normalized": false,
1487
+ "rstrip": false,
1488
+ "single_word": false,
1489
+ "special": true
1490
+ },
1491
+ "128186": {
1492
+ "content": "<|reserved_special_token_178|>",
1493
+ "lstrip": false,
1494
+ "normalized": false,
1495
+ "rstrip": false,
1496
+ "single_word": false,
1497
+ "special": true
1498
+ },
1499
+ "128187": {
1500
+ "content": "<|reserved_special_token_179|>",
1501
+ "lstrip": false,
1502
+ "normalized": false,
1503
+ "rstrip": false,
1504
+ "single_word": false,
1505
+ "special": true
1506
+ },
1507
+ "128188": {
1508
+ "content": "<|reserved_special_token_180|>",
1509
+ "lstrip": false,
1510
+ "normalized": false,
1511
+ "rstrip": false,
1512
+ "single_word": false,
1513
+ "special": true
1514
+ },
1515
+ "128189": {
1516
+ "content": "<|reserved_special_token_181|>",
1517
+ "lstrip": false,
1518
+ "normalized": false,
1519
+ "rstrip": false,
1520
+ "single_word": false,
1521
+ "special": true
1522
+ },
1523
+ "128190": {
1524
+ "content": "<|reserved_special_token_182|>",
1525
+ "lstrip": false,
1526
+ "normalized": false,
1527
+ "rstrip": false,
1528
+ "single_word": false,
1529
+ "special": true
1530
+ },
1531
+ "128191": {
1532
+ "content": "<|reserved_special_token_183|>",
1533
+ "lstrip": false,
1534
+ "normalized": false,
1535
+ "rstrip": false,
1536
+ "single_word": false,
1537
+ "special": true
1538
+ },
1539
+ "128192": {
1540
+ "content": "<|reserved_special_token_184|>",
1541
+ "lstrip": false,
1542
+ "normalized": false,
1543
+ "rstrip": false,
1544
+ "single_word": false,
1545
+ "special": true
1546
+ },
1547
+ "128193": {
1548
+ "content": "<|reserved_special_token_185|>",
1549
+ "lstrip": false,
1550
+ "normalized": false,
1551
+ "rstrip": false,
1552
+ "single_word": false,
1553
+ "special": true
1554
+ },
1555
+ "128194": {
1556
+ "content": "<|reserved_special_token_186|>",
1557
+ "lstrip": false,
1558
+ "normalized": false,
1559
+ "rstrip": false,
1560
+ "single_word": false,
1561
+ "special": true
1562
+ },
1563
+ "128195": {
1564
+ "content": "<|reserved_special_token_187|>",
1565
+ "lstrip": false,
1566
+ "normalized": false,
1567
+ "rstrip": false,
1568
+ "single_word": false,
1569
+ "special": true
1570
+ },
1571
+ "128196": {
1572
+ "content": "<|reserved_special_token_188|>",
1573
+ "lstrip": false,
1574
+ "normalized": false,
1575
+ "rstrip": false,
1576
+ "single_word": false,
1577
+ "special": true
1578
+ },
1579
+ "128197": {
1580
+ "content": "<|reserved_special_token_189|>",
1581
+ "lstrip": false,
1582
+ "normalized": false,
1583
+ "rstrip": false,
1584
+ "single_word": false,
1585
+ "special": true
1586
+ },
1587
+ "128198": {
1588
+ "content": "<|reserved_special_token_190|>",
1589
+ "lstrip": false,
1590
+ "normalized": false,
1591
+ "rstrip": false,
1592
+ "single_word": false,
1593
+ "special": true
1594
+ },
1595
+ "128199": {
1596
+ "content": "<|reserved_special_token_191|>",
1597
+ "lstrip": false,
1598
+ "normalized": false,
1599
+ "rstrip": false,
1600
+ "single_word": false,
1601
+ "special": true
1602
+ },
1603
+ "128200": {
1604
+ "content": "<|reserved_special_token_192|>",
1605
+ "lstrip": false,
1606
+ "normalized": false,
1607
+ "rstrip": false,
1608
+ "single_word": false,
1609
+ "special": true
1610
+ },
1611
+ "128201": {
1612
+ "content": "<|reserved_special_token_193|>",
1613
+ "lstrip": false,
1614
+ "normalized": false,
1615
+ "rstrip": false,
1616
+ "single_word": false,
1617
+ "special": true
1618
+ },
1619
+ "128202": {
1620
+ "content": "<|reserved_special_token_194|>",
1621
+ "lstrip": false,
1622
+ "normalized": false,
1623
+ "rstrip": false,
1624
+ "single_word": false,
1625
+ "special": true
1626
+ },
1627
+ "128203": {
1628
+ "content": "<|reserved_special_token_195|>",
1629
+ "lstrip": false,
1630
+ "normalized": false,
1631
+ "rstrip": false,
1632
+ "single_word": false,
1633
+ "special": true
1634
+ },
1635
+ "128204": {
1636
+ "content": "<|reserved_special_token_196|>",
1637
+ "lstrip": false,
1638
+ "normalized": false,
1639
+ "rstrip": false,
1640
+ "single_word": false,
1641
+ "special": true
1642
+ },
1643
+ "128205": {
1644
+ "content": "<|reserved_special_token_197|>",
1645
+ "lstrip": false,
1646
+ "normalized": false,
1647
+ "rstrip": false,
1648
+ "single_word": false,
1649
+ "special": true
1650
+ },
1651
+ "128206": {
1652
+ "content": "<|reserved_special_token_198|>",
1653
+ "lstrip": false,
1654
+ "normalized": false,
1655
+ "rstrip": false,
1656
+ "single_word": false,
1657
+ "special": true
1658
+ },
1659
+ "128207": {
1660
+ "content": "<|reserved_special_token_199|>",
1661
+ "lstrip": false,
1662
+ "normalized": false,
1663
+ "rstrip": false,
1664
+ "single_word": false,
1665
+ "special": true
1666
+ },
1667
+ "128208": {
1668
+ "content": "<|reserved_special_token_200|>",
1669
+ "lstrip": false,
1670
+ "normalized": false,
1671
+ "rstrip": false,
1672
+ "single_word": false,
1673
+ "special": true
1674
+ },
1675
+ "128209": {
1676
+ "content": "<|reserved_special_token_201|>",
1677
+ "lstrip": false,
1678
+ "normalized": false,
1679
+ "rstrip": false,
1680
+ "single_word": false,
1681
+ "special": true
1682
+ },
1683
+ "128210": {
1684
+ "content": "<|reserved_special_token_202|>",
1685
+ "lstrip": false,
1686
+ "normalized": false,
1687
+ "rstrip": false,
1688
+ "single_word": false,
1689
+ "special": true
1690
+ },
1691
+ "128211": {
1692
+ "content": "<|reserved_special_token_203|>",
1693
+ "lstrip": false,
1694
+ "normalized": false,
1695
+ "rstrip": false,
1696
+ "single_word": false,
1697
+ "special": true
1698
+ },
1699
+ "128212": {
1700
+ "content": "<|reserved_special_token_204|>",
1701
+ "lstrip": false,
1702
+ "normalized": false,
1703
+ "rstrip": false,
1704
+ "single_word": false,
1705
+ "special": true
1706
+ },
1707
+ "128213": {
1708
+ "content": "<|reserved_special_token_205|>",
1709
+ "lstrip": false,
1710
+ "normalized": false,
1711
+ "rstrip": false,
1712
+ "single_word": false,
1713
+ "special": true
1714
+ },
1715
+ "128214": {
1716
+ "content": "<|reserved_special_token_206|>",
1717
+ "lstrip": false,
1718
+ "normalized": false,
1719
+ "rstrip": false,
1720
+ "single_word": false,
1721
+ "special": true
1722
+ },
1723
+ "128215": {
1724
+ "content": "<|reserved_special_token_207|>",
1725
+ "lstrip": false,
1726
+ "normalized": false,
1727
+ "rstrip": false,
1728
+ "single_word": false,
1729
+ "special": true
1730
+ },
1731
+ "128216": {
1732
+ "content": "<|reserved_special_token_208|>",
1733
+ "lstrip": false,
1734
+ "normalized": false,
1735
+ "rstrip": false,
1736
+ "single_word": false,
1737
+ "special": true
1738
+ },
1739
+ "128217": {
1740
+ "content": "<|reserved_special_token_209|>",
1741
+ "lstrip": false,
1742
+ "normalized": false,
1743
+ "rstrip": false,
1744
+ "single_word": false,
1745
+ "special": true
1746
+ },
1747
+ "128218": {
1748
+ "content": "<|reserved_special_token_210|>",
1749
+ "lstrip": false,
1750
+ "normalized": false,
1751
+ "rstrip": false,
1752
+ "single_word": false,
1753
+ "special": true
1754
+ },
1755
+ "128219": {
1756
+ "content": "<|reserved_special_token_211|>",
1757
+ "lstrip": false,
1758
+ "normalized": false,
1759
+ "rstrip": false,
1760
+ "single_word": false,
1761
+ "special": true
1762
+ },
1763
+ "128220": {
1764
+ "content": "<|reserved_special_token_212|>",
1765
+ "lstrip": false,
1766
+ "normalized": false,
1767
+ "rstrip": false,
1768
+ "single_word": false,
1769
+ "special": true
1770
+ },
1771
+ "128221": {
1772
+ "content": "<|reserved_special_token_213|>",
1773
+ "lstrip": false,
1774
+ "normalized": false,
1775
+ "rstrip": false,
1776
+ "single_word": false,
1777
+ "special": true
1778
+ },
1779
+ "128222": {
1780
+ "content": "<|reserved_special_token_214|>",
1781
+ "lstrip": false,
1782
+ "normalized": false,
1783
+ "rstrip": false,
1784
+ "single_word": false,
1785
+ "special": true
1786
+ },
1787
+ "128223": {
1788
+ "content": "<|reserved_special_token_215|>",
1789
+ "lstrip": false,
1790
+ "normalized": false,
1791
+ "rstrip": false,
1792
+ "single_word": false,
1793
+ "special": true
1794
+ },
1795
+ "128224": {
1796
+ "content": "<|reserved_special_token_216|>",
1797
+ "lstrip": false,
1798
+ "normalized": false,
1799
+ "rstrip": false,
1800
+ "single_word": false,
1801
+ "special": true
1802
+ },
1803
+ "128225": {
1804
+ "content": "<|reserved_special_token_217|>",
1805
+ "lstrip": false,
1806
+ "normalized": false,
1807
+ "rstrip": false,
1808
+ "single_word": false,
1809
+ "special": true
1810
+ },
1811
+ "128226": {
1812
+ "content": "<|reserved_special_token_218|>",
1813
+ "lstrip": false,
1814
+ "normalized": false,
1815
+ "rstrip": false,
1816
+ "single_word": false,
1817
+ "special": true
1818
+ },
1819
+ "128227": {
1820
+ "content": "<|reserved_special_token_219|>",
1821
+ "lstrip": false,
1822
+ "normalized": false,
1823
+ "rstrip": false,
1824
+ "single_word": false,
1825
+ "special": true
1826
+ },
1827
+ "128228": {
1828
+ "content": "<|reserved_special_token_220|>",
1829
+ "lstrip": false,
1830
+ "normalized": false,
1831
+ "rstrip": false,
1832
+ "single_word": false,
1833
+ "special": true
1834
+ },
1835
+ "128229": {
1836
+ "content": "<|reserved_special_token_221|>",
1837
+ "lstrip": false,
1838
+ "normalized": false,
1839
+ "rstrip": false,
1840
+ "single_word": false,
1841
+ "special": true
1842
+ },
1843
+ "128230": {
1844
+ "content": "<|reserved_special_token_222|>",
1845
+ "lstrip": false,
1846
+ "normalized": false,
1847
+ "rstrip": false,
1848
+ "single_word": false,
1849
+ "special": true
1850
+ },
1851
+ "128231": {
1852
+ "content": "<|reserved_special_token_223|>",
1853
+ "lstrip": false,
1854
+ "normalized": false,
1855
+ "rstrip": false,
1856
+ "single_word": false,
1857
+ "special": true
1858
+ },
1859
+ "128232": {
1860
+ "content": "<|reserved_special_token_224|>",
1861
+ "lstrip": false,
1862
+ "normalized": false,
1863
+ "rstrip": false,
1864
+ "single_word": false,
1865
+ "special": true
1866
+ },
1867
+ "128233": {
1868
+ "content": "<|reserved_special_token_225|>",
1869
+ "lstrip": false,
1870
+ "normalized": false,
1871
+ "rstrip": false,
1872
+ "single_word": false,
1873
+ "special": true
1874
+ },
1875
+ "128234": {
1876
+ "content": "<|reserved_special_token_226|>",
1877
+ "lstrip": false,
1878
+ "normalized": false,
1879
+ "rstrip": false,
1880
+ "single_word": false,
1881
+ "special": true
1882
+ },
1883
+ "128235": {
1884
+ "content": "<|reserved_special_token_227|>",
1885
+ "lstrip": false,
1886
+ "normalized": false,
1887
+ "rstrip": false,
1888
+ "single_word": false,
1889
+ "special": true
1890
+ },
1891
+ "128236": {
1892
+ "content": "<|reserved_special_token_228|>",
1893
+ "lstrip": false,
1894
+ "normalized": false,
1895
+ "rstrip": false,
1896
+ "single_word": false,
1897
+ "special": true
1898
+ },
1899
+ "128237": {
1900
+ "content": "<|reserved_special_token_229|>",
1901
+ "lstrip": false,
1902
+ "normalized": false,
1903
+ "rstrip": false,
1904
+ "single_word": false,
1905
+ "special": true
1906
+ },
1907
+ "128238": {
1908
+ "content": "<|reserved_special_token_230|>",
1909
+ "lstrip": false,
1910
+ "normalized": false,
1911
+ "rstrip": false,
1912
+ "single_word": false,
1913
+ "special": true
1914
+ },
1915
+ "128239": {
1916
+ "content": "<|reserved_special_token_231|>",
1917
+ "lstrip": false,
1918
+ "normalized": false,
1919
+ "rstrip": false,
1920
+ "single_word": false,
1921
+ "special": true
1922
+ },
1923
+ "128240": {
1924
+ "content": "<|reserved_special_token_232|>",
1925
+ "lstrip": false,
1926
+ "normalized": false,
1927
+ "rstrip": false,
1928
+ "single_word": false,
1929
+ "special": true
1930
+ },
1931
+ "128241": {
1932
+ "content": "<|reserved_special_token_233|>",
1933
+ "lstrip": false,
1934
+ "normalized": false,
1935
+ "rstrip": false,
1936
+ "single_word": false,
1937
+ "special": true
1938
+ },
1939
+ "128242": {
1940
+ "content": "<|reserved_special_token_234|>",
1941
+ "lstrip": false,
1942
+ "normalized": false,
1943
+ "rstrip": false,
1944
+ "single_word": false,
1945
+ "special": true
1946
+ },
1947
+ "128243": {
1948
+ "content": "<|reserved_special_token_235|>",
1949
+ "lstrip": false,
1950
+ "normalized": false,
1951
+ "rstrip": false,
1952
+ "single_word": false,
1953
+ "special": true
1954
+ },
1955
+ "128244": {
1956
+ "content": "<|reserved_special_token_236|>",
1957
+ "lstrip": false,
1958
+ "normalized": false,
1959
+ "rstrip": false,
1960
+ "single_word": false,
1961
+ "special": true
1962
+ },
1963
+ "128245": {
1964
+ "content": "<|reserved_special_token_237|>",
1965
+ "lstrip": false,
1966
+ "normalized": false,
1967
+ "rstrip": false,
1968
+ "single_word": false,
1969
+ "special": true
1970
+ },
1971
+ "128246": {
1972
+ "content": "<|reserved_special_token_238|>",
1973
+ "lstrip": false,
1974
+ "normalized": false,
1975
+ "rstrip": false,
1976
+ "single_word": false,
1977
+ "special": true
1978
+ },
1979
+ "128247": {
1980
+ "content": "<|reserved_special_token_239|>",
1981
+ "lstrip": false,
1982
+ "normalized": false,
1983
+ "rstrip": false,
1984
+ "single_word": false,
1985
+ "special": true
1986
+ },
1987
+ "128248": {
1988
+ "content": "<|reserved_special_token_240|>",
1989
+ "lstrip": false,
1990
+ "normalized": false,
1991
+ "rstrip": false,
1992
+ "single_word": false,
1993
+ "special": true
1994
+ },
1995
+ "128249": {
1996
+ "content": "<|reserved_special_token_241|>",
1997
+ "lstrip": false,
1998
+ "normalized": false,
1999
+ "rstrip": false,
2000
+ "single_word": false,
2001
+ "special": true
2002
+ },
2003
+ "128250": {
2004
+ "content": "<|reserved_special_token_242|>",
2005
+ "lstrip": false,
2006
+ "normalized": false,
2007
+ "rstrip": false,
2008
+ "single_word": false,
2009
+ "special": true
2010
+ },
2011
+ "128251": {
2012
+ "content": "<|reserved_special_token_243|>",
2013
+ "lstrip": false,
2014
+ "normalized": false,
2015
+ "rstrip": false,
2016
+ "single_word": false,
2017
+ "special": true
2018
+ },
2019
+ "128252": {
2020
+ "content": "<|reserved_special_token_244|>",
2021
+ "lstrip": false,
2022
+ "normalized": false,
2023
+ "rstrip": false,
2024
+ "single_word": false,
2025
+ "special": true
2026
+ },
2027
+ "128253": {
2028
+ "content": "<|reserved_special_token_245|>",
2029
+ "lstrip": false,
2030
+ "normalized": false,
2031
+ "rstrip": false,
2032
+ "single_word": false,
2033
+ "special": true
2034
+ },
2035
+ "128254": {
2036
+ "content": "<|reserved_special_token_246|>",
2037
+ "lstrip": false,
2038
+ "normalized": false,
2039
+ "rstrip": false,
2040
+ "single_word": false,
2041
+ "special": true
2042
+ },
2043
+ "128255": {
2044
+ "content": "<|reserved_special_token_247|>",
2045
+ "lstrip": false,
2046
+ "normalized": false,
2047
+ "rstrip": false,
2048
+ "single_word": false,
2049
+ "special": true
2050
+ }
2051
+ },
2052
+ "bos_token": "<|begin_of_text|>",
2053
+ "chat_template": "{%- if messages[0]['role'] == 'system' -%}{%- set system_message = messages[0]['content'] | trim -%}{%- set messages = messages[1:] -%}{%- else -%}{%- set system_message = '' -%}{%- endif -%}{%- if tools is not none -%}{{- '<|begin_of_text|><|start_header_id|>system<|end_header_id|>' + '\n\n' + system_message -}} {{- '\n\n' if system_message else '' -}} {{- '<AVAILABLE_TOOLS>[' -}} {% for t in tools %}{{- (t.function if t.function is defined else t) | tojson() -}}{{- ', ' if not loop.last else '' -}}{%- endfor -%} {{- ']</AVAILABLE_TOOLS>' -}} {{- '<|eot_id|>' -}}{%- else -%}{{- '<|begin_of_text|><|start_header_id|>system<|end_header_id|>' + '\n\n' + system_message + '<|eot_id|>' -}}{%- endif -%}{%- for message in messages -%}{%- if (message['role'] in ['user', 'tool']) != (loop.index0 % 2 == 0) -%}{{- raise_exception('Conversation roles must alternate between user/tool and assistant') -}}{%- elif message['role'] == 'user' -%}{{- '<|start_header_id|>user<|end_header_id|>' + '\n\n' + message['content'] | trim + '<|eot_id|>' -}}{%- elif message['role'] == 'tool' -%}{%- set tool_response = '<TOOL_RESPONSE>[' + message['content'] | trim + ']</TOOL_RESPONSE>' -%}{{- '<|start_header_id|>user<|end_header_id|>' + '\n\n' + tool_response + '<|eot_id|>' -}}{%- elif message['role'] == 'assistant' and message.get('tool_calls') is not none -%}{%- set tool_calls = message['tool_calls'] -%}{{- '<|start_header_id|>assistant<|end_header_id|>' + '\n\n' + '<TOOLCALL>[' -}}{%- for tool_call in tool_calls -%}{{ '{' + '\"name\": \"' + tool_call.function.name + '\", \"arguments\": ' + tool_call.function.arguments | tojson + '}' }}{%- if not loop.last -%}{{ ', ' }}{%- else -%}{{ ']</TOOLCALL>' + '<|eot_id|>' }}{%- endif -%}{%- endfor -%}{%- elif message['role'] == 'assistant' -%}{{- '<|start_header_id|>assistant<|end_header_id|>' + '\n\n' + message['content'] | trim + '<|eot_id|>' -}}{%- endif -%}{%- endfor -%}{%- if add_generation_prompt -%}{{ '<|start_header_id|>assistant<|end_header_id|>' + '\n\n' }}{%- endif -%}",
2054
+ "clean_up_tokenization_spaces": true,
2055
+ "eos_token": "<|eot_id|>",
2056
+ "extra_special_tokens": {},
2057
+ "model_input_names": [
2058
+ "input_ids",
2059
+ "attention_mask"
2060
+ ],
2061
+ "model_max_length": 131072,
2062
+ "tokenizer_class": "PreTrainedTokenizerFast"
2063
+ }
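
The tokenizer_config.json above registers the LLaMA-3-style special tokens (`<|begin_of_text|>`, `<|eot_id|>`, the header markers, and the reserved token range) and embeds a Jinja chat template that wraps system/user/assistant turns and optional tool definitions. Below is a minimal sketch of how Hugging Face Transformers would consume this file; the repository path and the example messages are illustrative assumptions, not part of the commit.

```python
# Sketch: load the tokenizer from a local clone of this repo and render a prompt
# using the chat_template defined in tokenizer_config.json.
# The path and the example messages are placeholders (assumptions).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/this/repo")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the Apple Neural Engine?"},
]

# apply_chat_template expands the Jinja template above, inserting
# <|begin_of_text|>, <|start_header_id|>...<|end_header_id|> and <|eot_id|> markers.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)

# For model input, tokenize instead of returning a string:
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
```

Note that `eos_token` is set to `<|eot_id|>` (token id 128009), so a generation loop driven by this config should stop on that id rather than on `<|end_of_text|>` (128001).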