Update README.md
README.md CHANGED
@@ -18,6 +18,12 @@ The model has been quantized to 8 bits to:
 
 Below is an example script demonstrating how to load the 8-bit quantized model, perform translation, and decode the output:
 
+Make sure to install bitsandbytes:
+
+```
+pip install -U bitsandbytes
+```
+
 ```python
 import torch
 import transformers
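
The hunk above cuts the example script off after the imports. As a rough sketch of the rest, here is what loading the 8-bit checkpoint and running one translation could look like; the checkpoint id `your-org/your-8bit-translation-model` and the sample sentence are placeholders, and the loading path assumes the standard transformers + bitsandbytes integration via `BitsAndBytesConfig(load_in_8bit=True)`, not necessarily the exact script in this README:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical checkpoint id -- replace with this repo's actual model id.
model_id = "your-org/your-8bit-translation-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit routes the linear layers through bitsandbytes' int8 kernels.
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Translate one sentence and decode the generated ids back to text.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that `device_map="auto"` relies on accelerate to place the quantized weights, so install it alongside bitsandbytes if it is not already present.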