Jeronymous committed
Commit 8318e86
1 Parent(s): 1f94780

initial commit
README.md ADDED
@@ -0,0 +1,211 @@
---
language:
- fr
license: apache-2.0
pipeline_tag: text-generation
base_model: tiiuae/falcon-7b
tags:
- pretrained
- conversational
widget:
- text: |-
    - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
    - Bonjour Camille,
  example_title: Request for a recipe
  group: Dash
- text: |-
    [Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
    [Intervenant 2:] Bonjour Camille,
  example_title: Request for a recipe
  group: Intervenant
- text: |-
    [Camille:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
    [Dominique:] Bonjour Camille,
  example_title: Request for a recipe
  group: FirstName
- text: |-
    [Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
    [Dominique Petit:] Bonjour Camille,
  example_title: Request for a recipe
  group: Named
inference:
  parameters:
    temperature: 1.0
    max_new_tokens: 200
    top_k: 10
---

# Claire-7B-Apache-0.1

**Claire-7B-Apache-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)**
**adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on French conversational open data.**

Claire-7B-Apache-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that, due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language.

This model is made available under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).

It is a variant of [Claire-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1), which is trained on a larger quantity of French conversational data
but published under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).

* [Typical usage](#typical-usage)
  * [Typical prompts](#typical-prompts)
* [Training Details](#training-details)
  * [Training Data](#training-data)
  * [Training Procedure](#training-procedure)
* [Evaluation](#evaluation)
* [License](#license)
* [Acknowledgements](#acknowledgements)
* [Contact](#contact)

## Typical usage

```python
import transformers
import torch

model_name = "OpenLLM-France/Claire-7B-Apache-0.1"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    load_in_4bit=True,  # For efficient inference, if supported by the GPU card
)

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
generation_kwargs = dict(
    num_return_sequences=1,               # Number of variants to generate.
    return_full_text=False,               # Do not include the prompt in the generated text.
    max_new_tokens=200,                   # Maximum length for the output text.
    do_sample=True, top_k=10, temperature=1.0,  # Sampling parameters.
    pad_token_id=tokenizer.eos_token_id,  # Just to avoid a harmless warning.
)

prompt = """\
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,\
"""
completions = pipeline(prompt, **generation_kwargs)
for completion in completions:
    print(prompt + " […]" + completion["generated_text"])
```
This will print something like:
```
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille, […] je vous prépare un plat de saison, une daube provençale.
- Ah je ne connais pas cette recette.
- C'est très facile à préparer, vous n'avez qu'à mettre de l'eau dans une marmite, y mettre de l'oignon émincé, des carottes coupées en petits morceaux, et vous allez mettre votre viande de bœuf coupé en petits morceaux également.
- Je n'ai jamais cuisiné de viande de bœuf, mais c'est vrai que ça a l'air bien facile.
- Vous n'avez plus qu'à laisser mijoter, et ensuite il sera temps de servir les clients.
- Très bien.
```

You will need at least 6GB of VRAM to run inference using 4-bit quantization (16GB of VRAM without 4-bit quantization).

If you have trouble running this code, make sure you have recent versions of `torch`, `transformers` and `accelerate` (see [requirements.txt](requirements.txt)).
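These VRAM figures can be sanity-checked from the parameter count alone. The sketch below is a rough back-of-the-envelope estimate (the 7B figure is approximate, and activation, KV-cache and runtime overhead are ignored, which is why the actual requirements above are higher):

```python
# Rough VRAM needed just to hold the weights of a 7B-parameter model.
# Overhead (activations, KV cache, CUDA context) is not counted here.
n_params = 7e9
bf16_gb = n_params * 2 / 1024**3    # bfloat16: 2 bytes per parameter
int4_gb = n_params * 0.5 / 1024**3  # 4-bit quantization: 0.5 byte per parameter
print(f"bfloat16 weights: ~{bf16_gb:.1f} GB")
print(f"4-bit weights:    ~{int4_gb:.1f} GB")
```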

### Typical prompts

Claire-7B-Apache-0.1 was trained on diarized French conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows:

A monologue can be specified as a single-line prompt (though keep in mind that Claire might still return a dialogue because of its training):
```python
prompt = "Mesdames et messieurs les députés, chers collègues, bonsoir. Vous l'aurez peut-être remarqué, je cite rarement"
```

A dialogue between two speakers can be specified with one line per speech turn starting with a dash:
```python
prompt = """\
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,\
"""
```

A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Intervenant X:]`, where `X` is a number:
```python
prompt = """\
[Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Intervenant 2:] Bonjour Camille,\
"""
```

A dialogue or multilogue with named speakers can be specified with lines that start with `[SpeakerName:]`,
where `SpeakerName` can be a first name, a first and a last name, a nickname, a title…
```python
prompt = """\
[Mme Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Mr. Dominique Petit:] Bonjour Camille,\
"""
```

## Training Details

### Training Data

The training dataset will be made available soon.

Claire-7B-Apache-0.1 was tuned from Falcon-7b on the following data distribution:

| **Data type**                | **Words** | **Training Sampling Weight** | **Sources**         |
|------------------------------|-----------|------------------------------|---------------------|
| Parliamentary Proceedings    | 135M      | 54%                          | Assemblée Nationale |
| Theatre                      | 2.7M      | 23%                          | Théâtre Gratuit     |
| Meetings                     | 1.0M      | 16.6%                        | SUMM-RE, LinTO      |
| Debates                      | 326k      | 5.4%                         | FreDSum             |
| Presentations, Conversations | 58k       | 1%                           | LinTO               |
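The sampling weights differ markedly from the raw corpus sizes. A quick computation from the table (word counts rounded as shown) makes the implied over-sampling of the smaller, more conversational corpora explicit:

```python
# Implied over/under-sampling of each corpus, from the table above.
# "natural" is the share each corpus would get if sampled by size alone.
corpora = {
    "Parliamentary Proceedings": (135_000_000, 0.54),
    "Theatre": (2_700_000, 0.23),
    "Meetings": (1_000_000, 0.166),
    "Debates": (326_000, 0.054),
    "Presentations, Conversations": (58_000, 0.01),
}
total_words = sum(words for words, _ in corpora.values())
for name, (words, weight) in corpora.items():
    natural = words / total_words
    print(f"{name}: natural share {natural:.2%}, weight {weight:.1%}, factor x{weight / natural:.1f}")
```

Theatre, for instance, makes up under 2% of the words but 23% of the training distribution, an over-sampling of roughly 12x.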

Training data was augmented with the following techniques:
* varying the format used to indicate speech turns (dashes or `[XXX:]`)
* replacing `[Intervenant X:]` with `[SpeakerName:]` or vice versa, where `[SpeakerName:]` might be a real name or a randomly generated name
* removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems)

Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns.
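Since the augmentation code is not published yet, here is an illustrative sketch (not the actual training pipeline) of two of the transformations above: rewriting dash-prefixed turns as `[Intervenant X:]` tags, and stripping punctuation and casing to mimic ASR output:

```python
import re
import string

# Punctuation to strip; apostrophes are kept (assumption: ASR keeps them in French).
_PUNCT = string.punctuation.replace("'", "")

def dash_to_intervenant(dialogue: str) -> str:
    """Rewrite '- ...' speech turns as '[Intervenant X:] ...', alternating two speakers."""
    turns = [line.lstrip("- ").strip() for line in dialogue.strip().splitlines()]
    return "\n".join(f"[Intervenant {i % 2 + 1}:] {turn}" for i, turn in enumerate(turns))

def strip_punct_and_case(text: str) -> str:
    """Lowercase and remove punctuation from each turn, keeping speaker tags intact."""
    def clean(turn: str) -> str:
        no_punct = turn.translate(str.maketrans("", "", _PUNCT))
        return " ".join(no_punct.lower().split())
    out = []
    for line in text.splitlines():
        m = re.match(r"(\[[^\]]+:\] )(.*)", line)
        out.append(m.group(1) + clean(m.group(2)) if m else clean(line))
    return "\n".join(out)

dialogue = "- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?\n- Bonjour Camille,"
print(dash_to_intervenant(dialogue))
print(strip_punct_and_case(dash_to_intervenant(dialogue)))
```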

While the model has been trained and evaluated only on French dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data.

### Training Procedure

The training code will be made available soon.

Claire-7B-Apache-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predicting the next token).
See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details.

Claire-7B-Apache-0.1 was trained on 8 A100 80GB GPUs for about 50 GPU hours.

Hyperparameters were the following:

| **Hyperparameter** | **Value**  |
|--------------------|------------|
| Precision          | `bfloat16` |
| Optimizer          | AdamW      |
| Learning rate      | 1e-4       |
| Weight decay       | 1e-2       |
| Batch size         | 128        |
| LoRA rank          | 16         |
| LoRA alpha         | 32         |
| Dropout            | 0.05       |
| Gradient clipping  | 1          |
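The LoRA rank and alpha above imply a very small trainable footprint. As an illustration (hypothetical: the card does not state which projection matrices were adapted), here is the adapter cost for a single square `hidden_size` x `hidden_size` projection, with `hidden_size = 4544` taken from this repository's `config.json`:

```python
# LoRA replaces a frozen d x d weight update with two low-rank factors,
# B (d x r) and A (r x d), scaled by alpha / r.
d, r, alpha = 4544, 16, 32   # hidden_size from config.json; rank/alpha from the table
full_params = d * d          # parameters of the dense projection
lora_params = r * d + d * r  # parameters of A and B
print(f"dense projection: {full_params:,} parameters")
print(f"LoRA adapter:     {lora_params:,} parameters ({lora_params / full_params:.2%})")
print(f"update scaling alpha/r = {alpha / r}")
```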

## Evaluation

See the [Evaluation section of Claire-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1#evaluation).

## License

Claire-7B-Apache-0.1 is made available under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).

You can find a variant of this model trained on more data but published under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/)
at [OpenLLM-France/Claire-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1).

## Acknowledgements

This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561).

Claire-7B-Apache-0.1 was created by members of [LINAGORA](https://labs.linagora.com/) (in alphabetical order): Ismaïl Harrando, Julie Hunter, Jean-Pierre Lorré, Jérôme Louradour, Michel-Marie Maudet, Virgile Rennard, Guokan Shang.

Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice.

## Contact
config.json ADDED
@@ -0,0 +1,33 @@
{
  "alibi": false,
  "apply_residual_connection_post_layernorm": false,
  "architectures": [
    "FalconForCausalLM"
  ],
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_falcon.FalconConfig",
    "AutoModel": "modeling_falcon.FalconModel",
    "AutoModelForSequenceClassification": "modeling_falcon.FalconForSequenceClassification",
    "AutoModelForTokenClassification": "modeling_falcon.FalconForTokenClassification",
    "AutoModelForQuestionAnswering": "modeling_falcon.FalconForQuestionAnswering",
    "AutoModelForCausalLM": "modeling_falcon.FalconForCausalLM"
  },
  "bias": false,
  "bos_token_id": 11,
  "eos_token_id": 11,
  "hidden_dropout": 0.0,
  "hidden_size": 4544,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "falcon",
  "multi_query": true,
  "new_decoder_architecture": false,
  "num_attention_heads": 71,
  "num_hidden_layers": 32,
  "parallel_attn": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.27.4",
  "use_cache": true,
  "vocab_size": 65024
}
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 11,
  "eos_token_id": 11,
  "transformers_version": "4.34.0"
}
handler.py ADDED
@@ -0,0 +1,182 @@
import re
import unicodedata
from typing import Any, Dict, List

import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM


class EndpointHandler:
    def __init__(self, path):
        tokenizer = AutoTokenizer.from_pretrained(path)
        model = AutoModelForCausalLM.from_pretrained(
            path, device_map="auto", torch_dtype=torch.bfloat16, load_in_4bit=True
        )
        self.pipeline = transformers.pipeline(
            "text-generation", model=model, tokenizer=tokenizer
        )

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, str]]:
        # process input
        inputs = data.pop("inputs", data)

        # default parameters
        parameters = {
            "max_new_tokens": 128,
            "do_sample": True,
            "top_k": 10,
            "temperature": 1.0,
            "return_full_text": False,
        }

        # user parameters
        parameters.update(data.pop("parameters", {}))

        unique = isinstance(inputs, str)
        inputs, denormalize_funcs = claire_text_preproc_conversation(inputs)

        sequences = self.pipeline(inputs, **parameters)

        if unique:
            return [{"generated_text": denormalize_funcs(sequences[0]["generated_text"])}]
        else:
            assert len(denormalize_funcs) == len(sequences)
            return [
                {"generated_text": denormalize_func(seq[0]["generated_text"])}
                for denormalize_func, seq in zip(denormalize_funcs, sequences)
            ]


def claire_text_preproc_conversation(text):
    if isinstance(text, (list, tuple)):
        assert len(text)
        # Apply recursively, then transpose the (text, denormalize_func) pairs
        texts, denormalize_funcs = zip(*[claire_text_preproc_conversation(t) for t in text])
        return list(texts), list(denormalize_funcs)

    if not isinstance(text, str):
        return text, lambda x: x

    text = format_special_characters(text)

    text = re.sub(" - | -$|^- ", " ", text.strip(" "))

    global _reverse_tag_transfo
    _reverse_tag_transfo = {}
    text = format_special_tags(text)

    text = collapse_whitespaces_conversations(text)

    if _reverse_tag_transfo:
        reverse_tag_transfo = _reverse_tag_transfo.copy()

        def denormalize_func(t):
            for k, v in reverse_tag_transfo.items():
                if k in t:
                    t = t.replace(k, v)
            return t

        return text, denormalize_func

    else:
        return text, lambda x: x


_brackets = re.compile(r"\[([^\]]*)\]")
_pattern_speaker = re.compile(r"[^\]]+:")

# Global variables that record the normalizations applied, so they can be reversed on the output
_reverse_tag_transfo = {}
_anonymized_prefix = None


def format_special_tags(text):
    global _reverse_tag_transfo, _anonymized_prefix
    _anonymized_prefix = None
    text = re.sub(_brackets, _format_special_tags, text)
    # At last, the generic anonymization
    if _anonymized_prefix:
        _reverse_tag_transfo["[Intervenant "] = _anonymized_prefix
    return text


def _format_special_tags(match):
    content_within_brackets = match.group(1)
    if re.match(_pattern_speaker, content_within_brackets):
        return _format_tag(match.group())
    else:
        return ""


def _format_tag(text):
    global _reverse_tag_transfo, _anonymized_prefix
    if text.endswith(":]"):
        anonymized_spk_prefixes = ["speaker", "spk", "locuteur"]
        # Conversion "[speaker001:]" -> "[Intervenant 1:]"
        for prefix in anonymized_spk_prefixes:
            if text.lower().startswith("[" + prefix):
                try:
                    index = int(text[len(prefix) + 1:-2])
                except ValueError:
                    return text
                new_spk_tag = f"[Intervenant {index}:]"
                _reverse_tag_transfo[new_spk_tag] = text
                if _anonymized_prefix is None:
                    prefix = "[" + prefix
                    while len(prefix) < len(text) and text[len(prefix)] in " 0":
                        prefix += text[len(prefix)]
                    _anonymized_prefix = prefix
                return "\n" + new_spk_tag

        # Capitalize speaker name
        speaker = text[1:-2]
        speaker = capitalize(speaker)
        new_spk_tag = f"[{speaker}:]"
        if text != new_spk_tag:
            _reverse_tag_transfo[new_spk_tag] = text
        return "\n" + new_spk_tag

    # if text == "[PII]":
    #     return "[Nom]"
    # if text == "[NOISE]":
    #     return "[bruit]"
    # if text == "[LAUGHTER]":
    #     return "[rire]"

    return ""


def capitalize(text):
    # Custom capitalization for first and last names
    words = text.split(" ")
    words = [w.capitalize() if (not w.isupper() or len(w) > 2) else w for w in words]
    for i, w in enumerate(words):
        for sep in "-", "'":
            if sep in w:
                words[i] = sep.join(
                    [x.capitalize() if not x.isupper() else x for x in w.split(sep)]
                )
    return " ".join(words)


def collapse_whitespaces_conversations(text):
    text = re.sub(r"\n+", "\n", text)
    text = re.sub(r"[ \t]+", " ", text)
    text = re.sub(r"\n ", "\n", text)
    text = re.sub(r" ([\.,])", r"\1", text)
    return text.lstrip().rstrip(" ")


def format_special_characters(text):
    text = unicodedata.normalize("NFC", text)
    for before, after in [
        ("…", "..."),
        (r"[«“][^\S\r\n]*", '"'),
        (r"[^\S\r\n]*[»”″„]", '"'),
        (r"(``|'')", '"'),
        (r"[’‘‛ʿ]", "'"),
        ("‚", ","),
        (r"–", "-"),
        ("[  ]", " "),  # unbreakable spaces
        (r"[\x00-\x08\x0B\x0C\x0E-\x1F\x7F-\x9F]", ""),  # non-printable characters
        # ("·", "."),
        (r"ᵉʳ", "er"),
        (r"ᵉ", "e"),
    ]:
        text = re.sub(before, after, text)

    return text
pytorch_model-00001-of-00002.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa5e9e3d5ef64a50dab2150f282a08b4640a8ea933cf268e286c36cf06002388
size 9951007922
pytorch_model-00002-of-00002.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b4bbe61f3221dba97ad849493c111bb0e7bfcbc5f44bed9f2e489b47794406c
size 3892501648
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,203 @@
{
  "metadata": {
    "total_size": 13843441408
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.0.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.0.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.0.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.0.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.0.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.1.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.1.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.1.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.1.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.1.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.10.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.10.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.10.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.10.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.10.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.11.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.11.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.11.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.11.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.11.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.12.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.12.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.12.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.12.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.12.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.13.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.13.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.13.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.13.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.13.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.14.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.14.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.14.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.14.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.14.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.15.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.15.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.15.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.15.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.15.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.16.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.16.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.16.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.16.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.16.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.17.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.17.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.17.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.17.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.17.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.18.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.18.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.18.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.18.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.18.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.19.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.19.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.19.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.19.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.19.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.2.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.2.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.2.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.2.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.2.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.20.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.20.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.20.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.20.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.20.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.21.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.21.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.21.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.21.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.21.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.22.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.22.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.22.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.22.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.22.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.22.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.23.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.23.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.23.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.23.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.23.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.24.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.24.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.24.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.24.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.24.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.25.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.25.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.25.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.25.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.25.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.26.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.26.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.26.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.26.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.26.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.27.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.27.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.27.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.27.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.27.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.28.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.28.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.28.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.28.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.28.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.29.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.29.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.29.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.29.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.29.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.3.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.3.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.3.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.3.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.3.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.30.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.30.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.30.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.30.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.30.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.31.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
    "transformer.h.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.31.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.31.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.31.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.31.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.h.4.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.4.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.4.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.4.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.4.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.5.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.5.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.5.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.5.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.5.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.6.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.6.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.6.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
    "transformer.h.6.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
180
+ "transformer.h.6.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
181
+ "transformer.h.7.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
182
+ "transformer.h.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
183
+ "transformer.h.7.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
184
+ "transformer.h.7.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
185
+ "transformer.h.7.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
186
+ "transformer.h.7.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
187
+ "transformer.h.8.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
188
+ "transformer.h.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
189
+ "transformer.h.8.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
190
+ "transformer.h.8.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
191
+ "transformer.h.8.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
192
+ "transformer.h.8.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
193
+ "transformer.h.9.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
194
+ "transformer.h.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
195
+ "transformer.h.9.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
196
+ "transformer.h.9.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
197
+ "transformer.h.9.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
198
+ "transformer.h.9.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
199
+ "transformer.ln_f.bias": "pytorch_model-00002-of-00002.bin",
200
+ "transformer.ln_f.weight": "pytorch_model-00002-of-00002.bin",
201
+ "transformer.word_embeddings.weight": "pytorch_model-00001-of-00002.bin"
202
+ }
203
+ }
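The index above (presumably `pytorch_model.bin.index.json`, the standard name for a sharded `transformers` checkpoint index) maps every tensor name to the `.bin` shard that stores it, so the loader knows which of the two files to open for each weight. A minimal sketch of how such a weight map can be inverted to list the tensors held by each shard; the sample entries are copied from the diff above, and the helper name is illustrative:

```python
import json

# A few entries from the weight map shown in the diff above.
index_json = """
{
  "weight_map": {
    "transformer.h.3.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
    "transformer.h.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.ln_f.weight": "pytorch_model-00002-of-00002.bin",
    "transformer.word_embeddings.weight": "pytorch_model-00001-of-00002.bin"
  }
}
"""

def tensors_per_shard(weight_map: dict) -> dict:
    """Invert the tensor -> shard mapping into shard -> sorted list of tensors."""
    shards = {}
    for tensor, shard in weight_map.items():
        shards.setdefault(shard, []).append(tensor)
    return {shard: sorted(names) for shard, names in shards.items()}

index = json.loads(index_json)
for shard, names in sorted(tensors_per_shard(index["weight_map"]).items()):
    print(shard, len(names))
```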
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ transformers>=4.34.0
+ accelerate>=0.20.3
+ bitsandbytes
+ einops
special_tokens_map.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "additional_special_tokens": [
+ ">>TITLE<<",
+ ">>ABSTRACT<<",
+ ">>INTRODUCTION<<",
+ ">>SUMMARY<<",
+ ">>COMMENT<<",
+ ">>ANSWER<<",
+ ">>QUESTION<<",
+ ">>DOMAIN<<",
+ ">>PREFIX<<",
+ ">>SUFFIX<<",
+ ">>MIDDLE<<"
+ ],
+ "eos_token": "<|endoftext|>"
+ }
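Beyond the `>>…<<` markers inherited from Falcon's tokenizer, the only special token the map defines is the end-of-sequence token `<|endoftext|>`; generated dialogue continuations are typically cut at its first occurrence. A minimal sketch, assuming the decoded string still contains the raw token (the helper name is illustrative):

```python
EOS_TOKEN = "<|endoftext|>"  # eos_token from special_tokens_map.json above

def strip_after_eos(text: str, eos: str = EOS_TOKEN) -> str:
    """Return the text up to (and excluding) the first end-of-sequence token."""
    head, _, _ = text.partition(eos)
    return head

print(strip_after_eos("Bonjour Camille,<|endoftext|>[Camille:] ..."))
```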
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+ "add_prefix_space": false,
+ "eos_token": "<|endoftext|>",
+ "model_input_names": [
+ "input_ids",
+ "attention_mask"
+ ],
+ "model_max_length": 2048,
+ "name_or_path": "tiiuae/falcon_tokenizer",
+ "special_tokens_map_file": null,
+ "tokenizer_class": "PreTrainedTokenizerFast"
+ }
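The `model_input_names` entry above declares that the model expects `input_ids` and an `attention_mask`; when batching sequences of different lengths, the tokenizer pads them to a common length and marks real tokens versus padding in the mask. A minimal sketch of that batching, assuming left padding and an illustrative pad id (the real pad token comes from the tokenizer, and the function name is hypothetical):

```python
PAD_ID = 0  # illustrative pad id; Falcon's actual tokenizer defines its own

def pad_batch(batch: list[list[int]], pad_id: int = PAD_ID) -> dict:
    """Left-pad token-id sequences and build the matching attention masks."""
    max_len = max(len(seq) for seq in batch)
    input_ids, attention_mask = [], []
    for seq in batch:
        n_pad = max_len - len(seq)
        input_ids.append([pad_id] * n_pad + seq)          # pad on the left
        attention_mask.append([0] * n_pad + [1] * len(seq))  # 0 = padding
    return {"input_ids": input_ids, "attention_mask": attention_mask}

print(pad_batch([[5, 6], [7, 8, 9]]))
```

Left padding is the usual choice for decoder-only models like Falcon, so that generation continues from the last real token of every sequence in the batch.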