vijusudhi committed
Commit 73ff8a1 · verified · 1 Parent(s): 402567c

Upload tokenizer

Files changed (5)
  1. README.md +199 -0
  2. gptx_tokenizer.py +432 -0
  3. special_tokens_map.json +268 -0
  4. tokenizer.model +3 -0
  5. tokenizer_config.json +292 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
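Until the official snippet is filled in, a minimal sketch of loading this tokenizer (hypothetical usage: the repo ID below is simply the one hard-coded as `REPO_ID` in `gptx_tokenizer.py`, and `trust_remote_code=True` is needed because the tokenizer class ships with the repository):

```python
from transformers import AutoTokenizer

# Hypothetical quick start; repo ID copied from REPO_ID in gptx_tokenizer.py
# and may differ from this model's final location.
tokenizer = AutoTokenizer.from_pretrained(
    "lamarr-llm-development/elbedding-v2-AutoGPTQ-4bit",
    trust_remote_code=True,  # loads the custom tokenizer class from the repo
)

ids = tokenizer.encode("Hello, world!")
print(tokenizer.decode(ids))
```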
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
gptx_tokenizer.py ADDED
@@ -0,0 +1,432 @@
+ from __future__ import annotations
+
+ import json
+ import os
+ import warnings
+ from pathlib import Path
+ from typing import Any, Dict, List, Mapping, Optional, Tuple, Union
+
+ import numpy as np
+ import sentencepiece as spm
+ import torch
+ from huggingface_hub import hf_hub_download, list_repo_files, try_to_load_from_cache
+ from transformers.tokenization_utils import PreTrainedTokenizer
+ from transformers.tokenization_utils_base import TOKENIZER_CONFIG_FILE
+
+ REPO_ID = "lamarr-llm-development/elbedding-v2-AutoGPTQ-4bit"
+
+
+ class HFGPTXTokenizer(PreTrainedTokenizer):
+     """
+     A custom tokenizer class that extends Hugging Face's PreTrainedTokenizer.
+     It is specifically designed to work with SentencePiece models and integrates
+     with Hugging Face's tokenizer utilities.
+     """
+
+     model_file_glob = "*tokenizer.json"
+     vocab_files_names = {"tokenizer_file": "tokenizer.json"}
+     decode_kwargs: List[str] = []
+
+     def _encode(self, text: str, return_tokens: bool = False, is_continuation: bool = False):
+         """
+         Encode a given text using the tokenizer.
+
+         Args:
+             text (str): The text to encode.
+             return_tokens (bool): If True, returns token strings instead of token IDs.
+             is_continuation (bool): If True, uses the continuation tokenizer (if available).
+         Returns:
+             List[int] or List[str]: Encoded text as a list of token IDs or token strings.
+         """
+         assert self.tok is not None, "No tokenizer is currently loaded"
+
+         # Variant with an additional SentencePiece processor:
+         tokenizer = self.continuation_tokenizer if is_continuation else self.tok
+
+         if return_tokens:
+             return tokenizer.encode_as_pieces(text)
+         else:
+             return tokenizer.encode(text)
+
+     def create_list_of_special_tokens(self) -> List[str]:
+         """
+         Create a list of special tokens, including the BOS, EOS, PAD, EOD tokens,
+         and 256 additional placeholder tokens.
+
+         Returns:
+             List[str]: List of special tokens.
+         """
+         return [self.bos_token, self.eos_token, self.pad_token, self.eod_token] + [
+             f"<placeholder_tok_{i}>" for i in range(256)
+         ]
+
+     def find_tokenizer_config(self, config_path: Path, repo_id: Optional[str] = None) -> Optional[Path]:
+         if not os.path.isfile(config_path):
+             config_path = try_to_load_from_cache(repo_id=repo_id, filename=Path(config_path).name)
+             if not config_path:
+                 config_path = self._download_config_from_hub(repo_id=repo_id)
+
+         return config_path
+
+     def instantiate_from_file_or_name(self, model_file_or_name: str, repo_id: Optional[str] = None):
+         """
+         Load the tokenizer model from a file or download it from a repository.
+
+         Args:
+             model_file_or_name (str): Path to the model file or the model name.
+             repo_id (str, optional): Repository ID from which to download the model file.
+
+         Returns:
+             spm.SentencePieceProcessor: Loaded SentencePieceProcessor instance.
+
+         Raises:
+             ValueError: If repo_id is not provided when model_file_or_name is not a file.
+             OSError: If the model file cannot be loaded or downloaded.
+         """
+         if not os.path.isfile(model_file_or_name):
+             model_file_or_name = try_to_load_from_cache(repo_id=repo_id, filename=Path(model_file_or_name).name)
+             if not model_file_or_name:
+                 model_file_or_name = self._download_model_from_hub(repo_id=repo_id)
+
+         try:
+             return spm.SentencePieceProcessor(model_file=model_file_or_name)
+         except Exception as e:
+             raise OSError(f"Failed to load tokenizer model: {e}") from e
+
+     def _download_model_from_hub(self, repo_id: str) -> Optional[str]:
+         try:
+             # List all files in the repo.
+             repo_files = list_repo_files(repo_id)
+
+             # Find the tokenizer model file.
+             tokenizer_files = [f for f in repo_files if f.endswith(".model")]
+             if not tokenizer_files:
+                 raise FileNotFoundError(f"No .model file found in repository {repo_id}")
+
+             # Use the first .model file found.
+             model_file = tokenizer_files[0]
+             print(f"Found tokenizer model file: {model_file}")
+
+             # Download the file.
+             model_file_or_name = hf_hub_download(repo_id=repo_id, filename=model_file)
+             print(f"Downloaded tokenizer model to: {model_file_or_name}")
+         except Exception as e:
+             raise OSError(f"Failed to download tokenizer model: {e}") from e
+
+         return model_file_or_name
+
+     def _download_config_from_hub(self, repo_id: str):
+         if repo_id is None:
+             raise ValueError("repo_id must be provided if config_path is not a local file")
+
+         try:
+             # List all files in the repo.
+             repo_files = list_repo_files(repo_id)
+
+             # Find the tokenizer config file.
+             tokenizer_files = [f for f in repo_files if f.endswith("tokenizer_config.json")]
+             if not tokenizer_files:
+                 raise FileNotFoundError(f"No tokenizer_config.json file found in repository {repo_id}")
+
+             # Use the first tokenizer_config.json file found.
+             tokenizer_config_file = tokenizer_files[0]
+             print(f"Found tokenizer config file: {tokenizer_config_file}")
+
+             # Download the file.
+             tokenizer_config_file_or_name = hf_hub_download(repo_id=repo_id, filename=tokenizer_config_file)
+             print(f"Downloaded tokenizer config file to: {tokenizer_config_file_or_name}")
+             return tokenizer_config_file_or_name
+         except Exception as e:
+             raise OSError(f"Failed to download tokenizer config: {e}") from e
+
+     def __init__(
+         self,
+         model_path: Optional[str] = None,
+         config_path: Optional[str] = None,
+         **kwargs: Any,
+     ) -> None:
+         """
+         Initialize the tokenizer.
+
+         Args:
+             model_path (Optional[str]): Path to the tokenizer model file.
+             config_path (Optional[str]): Path to the tokenizer configuration file.
+             **kwargs: Additional keyword arguments passed to the superclass.
+
+         This method also ensures backward compatibility by setting
+         `clean_up_tokenization_spaces` to False by default.
+         """
+         # Prevent cleanup of tokenization spaces to maintain backward compatibility.
+         self.clean_up_tokenization_spaces = kwargs.setdefault("clean_up_tokenization_spaces", False)
+         self.vocab = None
+         cp_path = kwargs.get("name_or_path", ".")
+         if model_path is None:
+             model_path = str(Path(cp_path) / self.vocab_files_names["tokenizer_file"])
+         self.tok = self.instantiate_from_file_or_name(model_path, repo_id=REPO_ID)
+
+         super().__init__(**kwargs)
+
+         # Specify special tokens whose values we know.
+         # EOD from `tok` is used as what is called EOS in Hugging Face.
+         # Since there is no corresponding mapping for EOS from `tok` in
+         # Hugging Face, it is treated as an additional special token.
+         # The same holds for all other special tokens.
+         self.unk_token = "<unk>"
+         self.eos_token = "</s>"
+         self.bos_token = "<s>"
+         self.pad_token = "<pad>"
+         self.eod_token = "<eod>"
+
+         self.additional_special_tokens = self.create_list_of_special_tokens()
+
+         if config_path is None:
+             config_path = str(Path(cp_path) / TOKENIZER_CONFIG_FILE)
+
+         if os.path.isfile(config_path):
+             self.tokenizer_config = self.load_json(Path(config_path))
+         else:  # Load from repo.
+             self.tokenizer_config = self.load_json(Path(self.find_tokenizer_config(Path(config_path), repo_id=REPO_ID)))
+
+     @property
+     def vocab_size(self) -> int:
+         """
+         Get the size of the tokenizer vocabulary.
+
+         Returns:
+             int: The size of the vocabulary.
+         """
+         return self.tok.GetPieceSize()
+
+     def get_vocab(self) -> Dict[str, int]:
+         """
+         Get the vocabulary as a dictionary mapping token strings to their IDs.
+
+         Returns:
+             Dict[str, int]: Vocabulary mapping.
+         """
+         if self.vocab is None:
+             self.vocab = {self.tok.IdToPiece(i): i for i in range(self.vocab_size)}
+         return self.vocab
+
+     def _tokenize(self, text: str, **kwargs) -> List[str]:
+         """
+         Tokenize the input text.
+
+         Args:
+             text (str): Text to tokenize.
+             **kwargs: Additional keyword arguments.
+         Returns:
+             List[str]: List of token strings (token IDs if `return_tokens=False` is passed).
+         """
+         return_tokens = kwargs.pop("return_tokens", True)
+         return self._encode(text, return_tokens=return_tokens, **kwargs)
+
+     def _convert_token_to_id(self, token: str) -> int:
+         """
+         Convert a token string to its corresponding ID.
+
+         Args:
+             token (str): The token to convert.
+         Returns:
+             int: The token's ID.
+         """
+         return self.tok.PieceToId(token)
+
+     def decode(
+         self,
+         token_ids: Union[List[int], List[List[int]]],
+         num_threads: Optional[int] = None,
+         skip_special_tokens: bool = False,
+         clean_up_tokenization_spaces: bool = False,
+     ) -> str:
+         """
+         Decode a list of token IDs into a string.
+
+         Args:
+             token_ids (Union[List[int], List[List[int]]]): List of token IDs or lists of token IDs.
+             num_threads (Optional[int]): Number of threads to use for decoding.
+             skip_special_tokens (bool): If True, strip special tokens from the output.
+             clean_up_tokenization_spaces (bool): If True, clean up tokenization spaces (see warning below).
+         Returns:
+             str: Decoded string.
+         """
+         if isinstance(token_ids, torch.Tensor):  # For PyTorch tensors.
+             token_ids = token_ids.tolist()
+         elif isinstance(token_ids, np.ndarray):  # For NumPy arrays.
+             token_ids = token_ids.tolist()
+
+         output = self.tok.decode(input=token_ids, num_threads=num_threads)
+         if skip_special_tokens:
+             for substring in self.additional_special_tokens:
+                 output = output.replace(substring, "")
+
+         if clean_up_tokenization_spaces:
+             warnings.warn(
+                 "When cleaning up tokenization spaces, this will not behave "
+                 "like the original `GPTXTokenizer`. Please pass "
+                 "`clean_up_tokenization_spaces=False` when decoding."
+             )
+             output = self.clean_up_tokenization(output)
+
+         return output
+
+     def _convert_id_to_token(self, index: int) -> str:
+         """
+         Convert a token ID to its corresponding token string.
+
+         Args:
+             index (int): Token ID.
+         Returns:
+             str: Corresponding token string.
+         """
+         return self.tok.IdToPiece(index)
+
+     def convert_tokens_to_string(self, tokens: List[str]) -> str:
+         """
+         Convert a list of tokens into a single string.
+
+         Args:
+             tokens (List[str]): List of token strings.
+         Returns:
+             str: Concatenated string of tokens.
+         """
+         return self.tok.DecodePieces(tokens)
+
+     def _tok_decode(self, token_ids: List[int], **kwargs: Any) -> str:
+         """
+         Internal method to decode token IDs with additional arguments.
+
+         Args:
+             token_ids (List[int]): List of token IDs.
+             **kwargs: Additional arguments to pass to the decode method.
+         Returns:
+             str: Decoded string.
+
+         This method also issues a warning if unsupported arguments are provided.
+         """
+         passed_kwargs = {key: value for (key, value) in kwargs.items() if key in self.decode_kwargs}
+         if len(passed_kwargs) != len(kwargs):
+             warnings.warn("Silently ignoring some arguments to `decode` that the tokenizer does not support.")
+         text = self.decode(token_ids, **passed_kwargs)
+         return text
+
+     def save_tokenizer(self, save_dir: str) -> Optional[Tuple[str]]:
+         if not os.path.isdir(save_dir):
+             print(f"Vocabulary path ({save_dir}) should be a directory")
+             return None
+         out_vocab_file = os.path.join(save_dir, "tokenizer.model")
+
+         # if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
+         #     copyfile(self.vocab_file, out_vocab_file)
+         # elif not os.path.isfile(self.vocab_file):
+         with open(out_vocab_file, "wb") as f:
+             content_spiece_model = self.tok.serialized_model_proto()
+             f.write(content_spiece_model)
+
+         return (out_vocab_file,)
+
+     def _decode(
+         self,
+         token_ids: List[int],
+         skip_special_tokens: bool = False,
+         clean_up_tokenization_spaces: Optional[bool] = None,
+         spaces_between_special_tokens: bool = True,
+         **kwargs: Any,
+     ) -> str:
+         text = self._tok_decode(
+             token_ids,
+             skip_special_tokens=skip_special_tokens,
+             spaces_between_special_tokens=spaces_between_special_tokens,
+             **kwargs,
+         )
+
+         clean_up_tokenization_spaces = (
+             clean_up_tokenization_spaces
+             if clean_up_tokenization_spaces is not None
+             else self.clean_up_tokenization_spaces
+         )
+         if clean_up_tokenization_spaces:
+             warnings.warn(
+                 "When cleaning up tokenization spaces, this will not behave "
+                 "like the original `GPTXTokenizer`. Please pass "
+                 "`clean_up_tokenization_spaces=False` when decoding."
+             )
+             clean_text = self.clean_up_tokenization(text)
+             return clean_text
+         else:
+             return text
+
+     def save_vocabulary(
+         self,
+         save_directory: str,
+         filename_prefix: Optional[str] = None,
+     ) -> Tuple[str]:
+         filename_prefix = filename_prefix + "-" if filename_prefix else ""
+         save_directory = Path(save_directory)
+
+         self._save_tokenizer_config(save_directory, filename_prefix)
+         tokenizer_file_path = self._save_tokenizer(save_directory, filename_prefix)
+
+         return (tokenizer_file_path,)
+
+     def _save_tokenizer_config(
+         self,
+         save_directory: Path,
+         filename_prefix: str,
+     ) -> str:
+         self.save_tokenizer_config(save_directory)
+         old_tokenizer_config_path = save_directory / TOKENIZER_CONFIG_FILE
+         assert old_tokenizer_config_path.is_file(), "tokenizer config path changed"
+         new_tokenizer_config_path = save_directory / (filename_prefix + old_tokenizer_config_path.name)
+         old_tokenizer_config_path.replace(new_tokenizer_config_path)
+         return str(new_tokenizer_config_path)
+
+     def _find_tokenizer_files(self, save_directory: Path) -> List[Path]:
+         files = list(Path(save_directory).glob(self.model_file_glob))
+         return files
+
+     def _get_tokenizer_file(self, files: List[Path]):
+         assert files, "no saved tokenizer file found"
+         assert len(files) <= 1, "cannot handle multiple saved tokenizer files"
+         return files[0]
+
+     def _save_tokenizer(
+         self,
+         save_directory: Path,
+         filename_prefix: str,
+     ) -> str:
+         self.save_tokenizer(str(save_directory))
+         tokenizer_files = self._find_tokenizer_files(save_directory)
+         old_tokenizer_file_path = self._get_tokenizer_file(tokenizer_files)
+         assert old_tokenizer_file_path.is_file(), "could not access saved tokenizer file"
+         new_tokenizer_file_path = save_directory / (filename_prefix + self.vocab_files_names["tokenizer_file"])
+         old_tokenizer_file_path.replace(new_tokenizer_file_path)
+         return str(new_tokenizer_file_path)
+
+     def save_tokenizer_config(self, save_dir: Path) -> None:
+         # Convert Path values to str so they are JSON-serializable.
+         for k in self.tokenizer_config:
+             if isinstance(self.tokenizer_config[k], Path):
+                 self.tokenizer_config[k] = str(self.tokenizer_config[k])
+
+         info_file = save_dir / "tokenizer_config.json"
+         with info_file.open("w") as f:
+             json.dump(self.tokenizer_config, f, indent=4)
+
+     def load_json(self, path: Path) -> dict:
+         with path.open("r") as f:
+             return json.load(f)
+
+
+ class SPTokenizer(HFGPTXTokenizer):
+     model_file_glob = "*tokenizer.model"
+     vocab_files_names = {"tokenizer_file": "tokenizer.model"}
+     decode_kwargs = ["num_threads"]
+
+     # `is_continuation` does not work without this, but it doesn't
+     # implement all APIs of `PreTrainedTokenizer`.
+     def encode(self, text: str, **kwargs) -> List[int]:
+         return_tokens = kwargs.pop("return_tokens", False)
+         is_continuation = kwargs.pop("is_continuation", False)
+         return self._encode(
+             text,
+             return_tokens=return_tokens,
+             is_continuation=is_continuation,
+         )
+
+     def __init__(self, *args, **kwargs):
+         super().__init__(*args, **kwargs)
+
+         self.eos_token = "</s>"
+         self.eos_token_id = 2
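Note that `decode(..., skip_special_tokens=True)` in the file above removes special tokens by plain string replacement on the decoded text, not by filtering token IDs. That step can be reproduced in isolation (a standalone sketch; no SentencePiece model required):

```python
# Mirrors HFGPTXTokenizer.decode's skip_special_tokens loop: every special
# token string is removed from the decoded text via str.replace.
special_tokens = ["<s>", "</s>", "<pad>", "<eod>"] + [
    f"<placeholder_tok_{i}>" for i in range(256)
]

def strip_special_tokens(text: str) -> str:
    for token in special_tokens:
        text = text.replace(token, "")
    return text

print(strip_special_tokens("<s>Hello world</s><pad><placeholder_tok_0>"))
# → Hello world
```

One consequence of this design: the markers are also stripped if they ever occur literally inside user text, which ID-level filtering would not do.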
special_tokens_map.json ADDED
@@ -0,0 +1,268 @@
+ {
+   "additional_special_tokens": [
+     "<s>",
+     "</s>",
+     "<pad>",
+     "<eod>",
+     "<placeholder_tok_0>",
+     "<placeholder_tok_1>",
+     "<placeholder_tok_2>",
+     "<placeholder_tok_3>",
+     "<placeholder_tok_4>",
+     "<placeholder_tok_5>",
+     "<placeholder_tok_6>",
+     "<placeholder_tok_7>",
+     "<placeholder_tok_8>",
+     "<placeholder_tok_9>",
+     "<placeholder_tok_10>",
+     "<placeholder_tok_11>",
+     "<placeholder_tok_12>",
+     "<placeholder_tok_13>",
+     "<placeholder_tok_14>",
+     "<placeholder_tok_15>",
+     "<placeholder_tok_16>",
+     "<placeholder_tok_17>",
+     "<placeholder_tok_18>",
+     "<placeholder_tok_19>",
+     "<placeholder_tok_20>",
+     "<placeholder_tok_21>",
+     "<placeholder_tok_22>",
+     "<placeholder_tok_23>",
+     "<placeholder_tok_24>",
+     "<placeholder_tok_25>",
+     "<placeholder_tok_26>",
+     "<placeholder_tok_27>",
+     "<placeholder_tok_28>",
+     "<placeholder_tok_29>",
+     "<placeholder_tok_30>",
+     "<placeholder_tok_31>",
+     "<placeholder_tok_32>",
+     "<placeholder_tok_33>",
+     "<placeholder_tok_34>",
+     "<placeholder_tok_35>",
+     "<placeholder_tok_36>",
+     "<placeholder_tok_37>",
+     "<placeholder_tok_38>",
+     "<placeholder_tok_39>",
+     "<placeholder_tok_40>",
+     "<placeholder_tok_41>",
+     "<placeholder_tok_42>",
+     "<placeholder_tok_43>",
+     "<placeholder_tok_44>",
+     "<placeholder_tok_45>",
+     "<placeholder_tok_46>",
+     "<placeholder_tok_47>",
+     "<placeholder_tok_48>",
+     "<placeholder_tok_49>",
+     "<placeholder_tok_50>",
+     "<placeholder_tok_51>",
+     "<placeholder_tok_52>",
+     "<placeholder_tok_53>",
+     "<placeholder_tok_54>",
+     "<placeholder_tok_55>",
+     "<placeholder_tok_56>",
+     "<placeholder_tok_57>",
+     "<placeholder_tok_58>",
+     "<placeholder_tok_59>",
+     "<placeholder_tok_60>",
+     "<placeholder_tok_61>",
+     "<placeholder_tok_62>",
+     "<placeholder_tok_63>",
+     "<placeholder_tok_64>",
+     "<placeholder_tok_65>",
+     "<placeholder_tok_66>",
+     "<placeholder_tok_67>",
+     "<placeholder_tok_68>",
+     "<placeholder_tok_69>",
+     "<placeholder_tok_70>",
+     "<placeholder_tok_71>",
+     "<placeholder_tok_72>",
+     "<placeholder_tok_73>",
+     "<placeholder_tok_74>",
+     "<placeholder_tok_75>",
+     "<placeholder_tok_76>",
+     "<placeholder_tok_77>",
+     "<placeholder_tok_78>",
+     "<placeholder_tok_79>",
+     "<placeholder_tok_80>",
+     "<placeholder_tok_81>",
+     "<placeholder_tok_82>",
+     "<placeholder_tok_83>",
+     "<placeholder_tok_84>",
+     "<placeholder_tok_85>",
+     "<placeholder_tok_86>",
+     "<placeholder_tok_87>",
+     "<placeholder_tok_88>",
+     "<placeholder_tok_89>",
+     "<placeholder_tok_90>",
+     "<placeholder_tok_91>",
+     "<placeholder_tok_92>",
+     "<placeholder_tok_93>",
+     "<placeholder_tok_94>",
+     "<placeholder_tok_95>",
+     "<placeholder_tok_96>",
+     "<placeholder_tok_97>",
+     "<placeholder_tok_98>",
+     "<placeholder_tok_99>",
+     "<placeholder_tok_100>",
+     "<placeholder_tok_101>",
+     "<placeholder_tok_102>",
+     "<placeholder_tok_103>",
+     "<placeholder_tok_104>",
+     "<placeholder_tok_105>",
+     "<placeholder_tok_106>",
+     "<placeholder_tok_107>",
+     "<placeholder_tok_108>",
+     "<placeholder_tok_109>",
+     "<placeholder_tok_110>",
+     "<placeholder_tok_111>",
+     "<placeholder_tok_112>",
+     "<placeholder_tok_113>",
+     "<placeholder_tok_114>",
+     "<placeholder_tok_115>",
+     "<placeholder_tok_116>",
+     "<placeholder_tok_117>",
+     "<placeholder_tok_118>",
+     "<placeholder_tok_119>",
+     "<placeholder_tok_120>",
+     "<placeholder_tok_121>",
+     "<placeholder_tok_122>",
+     "<placeholder_tok_123>",
+     "<placeholder_tok_124>",
+     "<placeholder_tok_125>",
+     "<placeholder_tok_126>",
+     "<placeholder_tok_127>",
+     "<placeholder_tok_128>",
+     "<placeholder_tok_129>",
+     "<placeholder_tok_130>",
+     "<placeholder_tok_131>",
+     "<placeholder_tok_132>",
+     "<placeholder_tok_133>",
+     "<placeholder_tok_134>",
+     "<placeholder_tok_135>",
+     "<placeholder_tok_136>",
+     "<placeholder_tok_137>",
+     "<placeholder_tok_138>",
+     "<placeholder_tok_139>",
+     "<placeholder_tok_140>",
+     "<placeholder_tok_141>",
+     "<placeholder_tok_142>",
+     "<placeholder_tok_143>",
+     "<placeholder_tok_144>",
+     "<placeholder_tok_145>",
+     "<placeholder_tok_146>",
+     "<placeholder_tok_147>",
+     "<placeholder_tok_148>",
+     "<placeholder_tok_149>",
+     "<placeholder_tok_150>",
+     "<placeholder_tok_151>",
+     "<placeholder_tok_152>",
+     "<placeholder_tok_153>",
+     "<placeholder_tok_154>",
+     "<placeholder_tok_155>",
+     "<placeholder_tok_156>",
+     "<placeholder_tok_157>",
+     "<placeholder_tok_158>",
+     "<placeholder_tok_159>",
+     "<placeholder_tok_160>",
+     "<placeholder_tok_161>",
+     "<placeholder_tok_162>",
+     "<placeholder_tok_163>",
+     "<placeholder_tok_164>",
+     "<placeholder_tok_165>",
+     "<placeholder_tok_166>",
+     "<placeholder_tok_167>",
+     "<placeholder_tok_168>",
+     "<placeholder_tok_169>",
+     "<placeholder_tok_170>",
+     "<placeholder_tok_171>",
+     "<placeholder_tok_172>",
+     "<placeholder_tok_173>",
+     "<placeholder_tok_174>",
+     "<placeholder_tok_175>",
+     "<placeholder_tok_176>",
+     "<placeholder_tok_177>",
+     "<placeholder_tok_178>",
+     "<placeholder_tok_179>",
+     "<placeholder_tok_180>",
+     "<placeholder_tok_181>",
+     "<placeholder_tok_182>",
+     "<placeholder_tok_183>",
+     "<placeholder_tok_184>",
+     "<placeholder_tok_185>",
+     "<placeholder_tok_186>",
+     "<placeholder_tok_187>",
+     "<placeholder_tok_188>",
+     "<placeholder_tok_189>",
+     "<placeholder_tok_190>",
+     "<placeholder_tok_191>",
+     "<placeholder_tok_192>",
+     "<placeholder_tok_193>",
+     "<placeholder_tok_194>",
+     "<placeholder_tok_195>",
+     "<placeholder_tok_196>",
+     "<placeholder_tok_197>",
+     "<placeholder_tok_198>",
+     "<placeholder_tok_199>",
+     "<placeholder_tok_200>",
+     "<placeholder_tok_201>",
+     "<placeholder_tok_202>",
+     "<placeholder_tok_203>",
+     "<placeholder_tok_204>",
+     "<placeholder_tok_205>",
+     "<placeholder_tok_206>",
+     "<placeholder_tok_207>",
+     "<placeholder_tok_208>",
+     "<placeholder_tok_209>",
+     "<placeholder_tok_210>",
+     "<placeholder_tok_211>",
+     "<placeholder_tok_212>",
+     "<placeholder_tok_213>",
+     "<placeholder_tok_214>",
+     "<placeholder_tok_215>",
+     "<placeholder_tok_216>",
+     "<placeholder_tok_217>",
+     "<placeholder_tok_218>",
+     "<placeholder_tok_219>",
+     "<placeholder_tok_220>",
+     "<placeholder_tok_221>",
+     "<placeholder_tok_222>",
+     "<placeholder_tok_223>",
+     "<placeholder_tok_224>",
+     "<placeholder_tok_225>",
+     "<placeholder_tok_226>",
+     "<placeholder_tok_227>",
+     "<placeholder_tok_228>",
+     "<placeholder_tok_229>",
+     "<placeholder_tok_230>",
+     "<placeholder_tok_231>",
+     "<placeholder_tok_232>",
+     "<placeholder_tok_233>",
+     "<placeholder_tok_234>",
+     "<placeholder_tok_235>",
+     "<placeholder_tok_236>",
+     "<placeholder_tok_237>",
+     "<placeholder_tok_238>",
+     "<placeholder_tok_239>",
+     "<placeholder_tok_240>",
+     "<placeholder_tok_241>",
+     "<placeholder_tok_242>",
+     "<placeholder_tok_243>",
+     "<placeholder_tok_244>",
+     "<placeholder_tok_245>",
+     "<placeholder_tok_246>",
+     "<placeholder_tok_247>",
+     "<placeholder_tok_248>",
+     "<placeholder_tok_249>",
+     "<placeholder_tok_250>",
+     "<placeholder_tok_251>",
+     "<placeholder_tok_252>",
+     "<placeholder_tok_253>",
+     "<placeholder_tok_254>",
+     "<placeholder_tok_255>"
+   ],
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "<pad>",
+   "unk_token": "<unk>"
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08d0c8316539a853f2fe6e14f51f0df583011dfb078fa08c8b6dc5c15a19a7e6
+ size 4719922
tokenizer_config.json ADDED
@@ -0,0 +1,292 @@
+ {
+ "num_threads": 224,
+ "split_by_whitespace": true,
+ "model_type": "unigram",
+ "vocab_size": 250680,
+ "character_coverage": 0.9999,
+ "byte_fallback": true,
+ "split_by_number": true,
+ "split_digits": true,
+ "normalization_rule_name": "nfkc",
+ "max_sentence_length": 4096,
+ "shuffle_input_sentence": true,
+ "input_sentence_size": 0,
+ "train_extremely_large_corpus": true,
+ "allow_whitespace_only_pieces": true,
+ "required_chars": "",
+ "remove_extra_whitespaces": false,
+ "user_defined_symbols": [
+ "<s>",
+ "</s>",
+ "<pad>",
+ "<eod>",
+ "<placeholder_tok_0>",
+ "<placeholder_tok_1>",
+ "<placeholder_tok_2>",
+ "<placeholder_tok_3>",
+ "<placeholder_tok_4>",
+ "<placeholder_tok_5>",
+ "<placeholder_tok_6>",
+ "<placeholder_tok_7>",
+ "<placeholder_tok_8>",
+ "<placeholder_tok_9>",
+ "<placeholder_tok_10>",
+ "<placeholder_tok_11>",
+ "<placeholder_tok_12>",
+ "<placeholder_tok_13>",
+ "<placeholder_tok_14>",
+ "<placeholder_tok_15>",
+ "<placeholder_tok_16>",
+ "<placeholder_tok_17>",
+ "<placeholder_tok_18>",
+ "<placeholder_tok_19>",
+ "<placeholder_tok_20>",
+ "<placeholder_tok_21>",
+ "<placeholder_tok_22>",
+ "<placeholder_tok_23>",
+ "<placeholder_tok_24>",
+ "<placeholder_tok_25>",
+ "<placeholder_tok_26>",
+ "<placeholder_tok_27>",
+ "<placeholder_tok_28>",
+ "<placeholder_tok_29>",
+ "<placeholder_tok_30>",
+ "<placeholder_tok_31>",
+ "<placeholder_tok_32>",
+ "<placeholder_tok_33>",
+ "<placeholder_tok_34>",
+ "<placeholder_tok_35>",
+ "<placeholder_tok_36>",
+ "<placeholder_tok_37>",
+ "<placeholder_tok_38>",
+ "<placeholder_tok_39>",
+ "<placeholder_tok_40>",
+ "<placeholder_tok_41>",
+ "<placeholder_tok_42>",
+ "<placeholder_tok_43>",
+ "<placeholder_tok_44>",
+ "<placeholder_tok_45>",
+ "<placeholder_tok_46>",
+ "<placeholder_tok_47>",
+ "<placeholder_tok_48>",
+ "<placeholder_tok_49>",
+ "<placeholder_tok_50>",
+ "<placeholder_tok_51>",
+ "<placeholder_tok_52>",
+ "<placeholder_tok_53>",
+ "<placeholder_tok_54>",
+ "<placeholder_tok_55>",
+ "<placeholder_tok_56>",
+ "<placeholder_tok_57>",
+ "<placeholder_tok_58>",
+ "<placeholder_tok_59>",
+ "<placeholder_tok_60>",
+ "<placeholder_tok_61>",
+ "<placeholder_tok_62>",
+ "<placeholder_tok_63>",
+ "<placeholder_tok_64>",
+ "<placeholder_tok_65>",
+ "<placeholder_tok_66>",
+ "<placeholder_tok_67>",
+ "<placeholder_tok_68>",
+ "<placeholder_tok_69>",
+ "<placeholder_tok_70>",
+ "<placeholder_tok_71>",
+ "<placeholder_tok_72>",
+ "<placeholder_tok_73>",
+ "<placeholder_tok_74>",
+ "<placeholder_tok_75>",
+ "<placeholder_tok_76>",
+ "<placeholder_tok_77>",
+ "<placeholder_tok_78>",
+ "<placeholder_tok_79>",
+ "<placeholder_tok_80>",
+ "<placeholder_tok_81>",
+ "<placeholder_tok_82>",
+ "<placeholder_tok_83>",
+ "<placeholder_tok_84>",
+ "<placeholder_tok_85>",
+ "<placeholder_tok_86>",
+ "<placeholder_tok_87>",
+ "<placeholder_tok_88>",
+ "<placeholder_tok_89>",
+ "<placeholder_tok_90>",
+ "<placeholder_tok_91>",
+ "<placeholder_tok_92>",
+ "<placeholder_tok_93>",
+ "<placeholder_tok_94>",
+ "<placeholder_tok_95>",
+ "<placeholder_tok_96>",
+ "<placeholder_tok_97>",
+ "<placeholder_tok_98>",
+ "<placeholder_tok_99>",
+ "<placeholder_tok_100>",
+ "<placeholder_tok_101>",
+ "<placeholder_tok_102>",
+ "<placeholder_tok_103>",
+ "<placeholder_tok_104>",
+ "<placeholder_tok_105>",
+ "<placeholder_tok_106>",
+ "<placeholder_tok_107>",
+ "<placeholder_tok_108>",
+ "<placeholder_tok_109>",
+ "<placeholder_tok_110>",
+ "<placeholder_tok_111>",
+ "<placeholder_tok_112>",
+ "<placeholder_tok_113>",
+ "<placeholder_tok_114>",
+ "<placeholder_tok_115>",
+ "<placeholder_tok_116>",
+ "<placeholder_tok_117>",
+ "<placeholder_tok_118>",
+ "<placeholder_tok_119>",
+ "<placeholder_tok_120>",
+ "<placeholder_tok_121>",
+ "<placeholder_tok_122>",
+ "<placeholder_tok_123>",
+ "<placeholder_tok_124>",
+ "<placeholder_tok_125>",
+ "<placeholder_tok_126>",
+ "<placeholder_tok_127>",
+ "<placeholder_tok_128>",
+ "<placeholder_tok_129>",
+ "<placeholder_tok_130>",
+ "<placeholder_tok_131>",
+ "<placeholder_tok_132>",
+ "<placeholder_tok_133>",
+ "<placeholder_tok_134>",
+ "<placeholder_tok_135>",
+ "<placeholder_tok_136>",
+ "<placeholder_tok_137>",
+ "<placeholder_tok_138>",
+ "<placeholder_tok_139>",
+ "<placeholder_tok_140>",
+ "<placeholder_tok_141>",
+ "<placeholder_tok_142>",
+ "<placeholder_tok_143>",
+ "<placeholder_tok_144>",
+ "<placeholder_tok_145>",
+ "<placeholder_tok_146>",
+ "<placeholder_tok_147>",
+ "<placeholder_tok_148>",
+ "<placeholder_tok_149>",
+ "<placeholder_tok_150>",
+ "<placeholder_tok_151>",
+ "<placeholder_tok_152>",
+ "<placeholder_tok_153>",
+ "<placeholder_tok_154>",
+ "<placeholder_tok_155>",
+ "<placeholder_tok_156>",
+ "<placeholder_tok_157>",
+ "<placeholder_tok_158>",
+ "<placeholder_tok_159>",
+ "<placeholder_tok_160>",
+ "<placeholder_tok_161>",
+ "<placeholder_tok_162>",
+ "<placeholder_tok_163>",
+ "<placeholder_tok_164>",
+ "<placeholder_tok_165>",
+ "<placeholder_tok_166>",
+ "<placeholder_tok_167>",
+ "<placeholder_tok_168>",
+ "<placeholder_tok_169>",
+ "<placeholder_tok_170>",
+ "<placeholder_tok_171>",
+ "<placeholder_tok_172>",
+ "<placeholder_tok_173>",
+ "<placeholder_tok_174>",
+ "<placeholder_tok_175>",
+ "<placeholder_tok_176>",
+ "<placeholder_tok_177>",
+ "<placeholder_tok_178>",
+ "<placeholder_tok_179>",
+ "<placeholder_tok_180>",
+ "<placeholder_tok_181>",
+ "<placeholder_tok_182>",
+ "<placeholder_tok_183>",
+ "<placeholder_tok_184>",
+ "<placeholder_tok_185>",
+ "<placeholder_tok_186>",
+ "<placeholder_tok_187>",
+ "<placeholder_tok_188>",
+ "<placeholder_tok_189>",
+ "<placeholder_tok_190>",
+ "<placeholder_tok_191>",
+ "<placeholder_tok_192>",
+ "<placeholder_tok_193>",
+ "<placeholder_tok_194>",
+ "<placeholder_tok_195>",
+ "<placeholder_tok_196>",
+ "<placeholder_tok_197>",
+ "<placeholder_tok_198>",
+ "<placeholder_tok_199>",
+ "<placeholder_tok_200>",
+ "<placeholder_tok_201>",
+ "<placeholder_tok_202>",
+ "<placeholder_tok_203>",
+ "<placeholder_tok_204>",
+ "<placeholder_tok_205>",
+ "<placeholder_tok_206>",
+ "<placeholder_tok_207>",
+ "<placeholder_tok_208>",
+ "<placeholder_tok_209>",
+ "<placeholder_tok_210>",
+ "<placeholder_tok_211>",
+ "<placeholder_tok_212>",
+ "<placeholder_tok_213>",
+ "<placeholder_tok_214>",
+ "<placeholder_tok_215>",
+ "<placeholder_tok_216>",
+ "<placeholder_tok_217>",
+ "<placeholder_tok_218>",
+ "<placeholder_tok_219>",
+ "<placeholder_tok_220>",
+ "<placeholder_tok_221>",
+ "<placeholder_tok_222>",
+ "<placeholder_tok_223>",
+ "<placeholder_tok_224>",
+ "<placeholder_tok_225>",
+ "<placeholder_tok_226>",
+ "<placeholder_tok_227>",
+ "<placeholder_tok_228>",
+ "<placeholder_tok_229>",
+ "<placeholder_tok_230>",
+ "<placeholder_tok_231>",
+ "<placeholder_tok_232>",
+ "<placeholder_tok_233>",
+ "<placeholder_tok_234>",
+ "<placeholder_tok_235>",
+ "<placeholder_tok_236>",
+ "<placeholder_tok_237>",
+ "<placeholder_tok_238>",
+ "<placeholder_tok_239>",
+ "<placeholder_tok_240>",
+ "<placeholder_tok_241>",
+ "<placeholder_tok_242>",
+ "<placeholder_tok_243>",
+ "<placeholder_tok_244>",
+ "<placeholder_tok_245>",
+ "<placeholder_tok_246>",
+ "<placeholder_tok_247>",
+ "<placeholder_tok_248>",
+ "<placeholder_tok_249>",
+ "<placeholder_tok_250>",
+ "<placeholder_tok_251>",
+ "<placeholder_tok_252>",
+ "<placeholder_tok_253>",
+ "<placeholder_tok_254>",
+ "<placeholder_tok_255>"
+ ],
+ "datasets_dir": "/home/fhgiais/gptx_ablations/bias_analysis/data/tokenizer/temp/",
+ "save_dir": "/home/fhgiais/gptx_ablations/bias_analysis/tokenizer/24",
+ "text_key": "text",
+ "cache_dir": "/home/fhgiais/gptx_ablations/bias_analysis/tokenizer/24/cache",
+ "library": "sentencepiece",
+ "auto_map": {
+ "AutoTokenizer": [
+ "gptx_tokenizer.SPTokenizer",
+ null
+ ]
+ },
+ "tokenizer_class": "SPTokenizer"
+ }
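
Since the configuration above is plain JSON, its token inventory can be sanity-checked programmatically. The following is a minimal sketch (not part of the repository) that reconstructs the `user_defined_symbols` list from the diff — 4 control tokens plus 256 placeholder tokens — and verifies the counts against the values stated in `tokenizer_config.json`:

```python
import json

# Reconstruct the user_defined_symbols list exactly as it appears in
# tokenizer_config.json: 4 control tokens followed by 256 placeholders.
special = ["<s>", "</s>", "<pad>", "<eod>"]
placeholders = [f"<placeholder_tok_{i}>" for i in range(256)]
user_defined_symbols = special + placeholders

# A few of the config values from the diff, for a round-trip format check.
config = {
    "model_type": "unigram",
    "vocab_size": 250680,
    "byte_fallback": True,
    "user_defined_symbols": user_defined_symbols,
}

# Serializing and re-parsing confirms the structure is valid JSON.
parsed = json.loads(json.dumps(config))
print(len(parsed["user_defined_symbols"]))  # 260 symbols in total
```

The reserved placeholder tokens keep the vocabulary size stable if additional special tokens are assigned later, without retraining the SentencePiece model.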