Update README.md
README.md CHANGED
@@ -53,13 +53,10 @@ pipeline_tag: text-generation
 
 Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit), this is a version of [bigscience/bloom](https://huggingface.co/bigscience/bloom-6b3), a ~6-billion-parameter language model that you can run and fine-tune with less memory.
 
-Here, we also apply [LoRA (Low Rank Adaptation)](https://arxiv.org/abs/2106.09685) to reduce model size.
-
-Our main goal is to generate a model compressed enough to be deployed in a traditional Kubernetes cluster.
+Here, we also apply [LoRA (Low Rank Adaptation)](https://arxiv.org/abs/2106.09685) to reduce model size.
 
 ### How to fine-tune
-
-In this [notebook](https://nbviewer.org/urls/huggingface.co/joaoalvarenga/bloom-8bit/raw/main/fine-tuning-example.ipynb) you can find an adaptation of [Hivemind's GPT-J 8-bit fine-tuning notebook](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) for fine-tuning Bloom 8-bit on a 3x NVIDIA A100 instance.
+TBA
 
 ### How to use
 
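The paragraphs kept in the hunk above describe the 8-bit weights and LoRA only in prose. For readers of this change, here is a minimal sketch of that combination, assuming the bitsandbytes-backed `load_in_8bit` flag in `transformers` and the `peft` library. The card's actual weights ship already quantized with their own code, so treat everything here (including starting from the `bigscience/bloom-6b3` base id) as illustrative, not this repo's method:

```python
# Illustrative sketch only: load a BLOOM checkpoint with 8-bit weights and
# attach LoRA adapters. Assumes transformers' bitsandbytes integration and
# the peft library; this repo's own 8-bit/LoRA wiring may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "bigscience/bloom-6b3"  # base checkpoint linked in the card

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # spread layers across available devices
    load_in_8bit=True,   # quantize linear-layer weights to int8 on load
)

# LoRA: freeze the int8 base model and train small low-rank adapters instead.
lora_config = LoraConfig(
    r=8,                                 # rank of the adapter matrices
    lora_alpha=16,                       # scaling applied to adapter output
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters remain trainable
```

At int8, the ~6B weights take roughly 6 GB instead of ~24 GB in fp32, which is where the "less memory" claim above comes from.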
@@ -222,10 +219,4 @@ tokenizer = BloomTokenizerFast.from_pretrained(model_name)
 prompt = tokenizer("Given a table named salaries and columns id, created_at, salary, age. Creates a SQL to answer What is the average salary for 22 years old:", return_tensors='pt')
 out = model.generate(**prompt, min_length=10, do_sample=True)
 tokenizer.decode(out[0])
-```
-
-
-
-
-
-
+```
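The three context lines above are only the tail of the card's usage example; the hunk header shows the tokenizer line that precedes them. A self-contained rendering of the same snippet follows, with a hedged plain `from_pretrained` load, since the card's own model-loading code sits outside this diff and may differ:

```python
# Self-contained version of the usage snippet from the hunk above.
# The loading step is an assumption for illustration; the card's loading
# code (not shown in this diff) may handle the 8-bit weights differently.
from transformers import BloomForCausalLM, BloomTokenizerFast

model_name = "joaoalvarenga/bloom-8bit"  # repo id taken from the notebook URL
tokenizer = BloomTokenizerFast.from_pretrained(model_name)
model = BloomForCausalLM.from_pretrained(model_name)

prompt = tokenizer(
    "Given a table named salaries and columns id, created_at, salary, age. "
    "Creates a SQL to answer What is the average salary for 22 years old:",
    return_tensors="pt",
)
out = model.generate(**prompt, min_length=10, do_sample=True)
print(tokenizer.decode(out[0]))
```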