---
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-V2-Lite
---

# ⚠️ Note: 

These model weights are for personal testing purposes only. The goal is to find a quantization method that achieves high compression while preserving as much of the model's original performance as possible. The current compression scheme may not be optimal, so please use these weights with caution.

# Creation
This model was created with the `bitsandbytes` and `transformers` libraries, as shown in the code snippet below.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "deepseek-ai/DeepSeek-V2-Lite"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # NF4 quantization for the weights
    bnb_4bit_use_double_quant=True,     # double quantization to further reduce memory
    bnb_4bit_compute_dtype=torch.bfloat16  # compute in bfloat16
)

bnb_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

bnb_model.push_to_hub("basicv8vc/DeepSeek-V2-Lite-bnb-4bit")
tokenizer.push_to_hub("basicv8vc/DeepSeek-V2-Lite-bnb-4bit")
```
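
# Usage

For reference, a minimal loading and inference sketch is shown below. The quantization settings are read from the saved model config, so no `BitsAndBytesConfig` is needed at load time; the prompt and generation settings here are only illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "basicv8vc/DeepSeek-V2-Lite-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # place the 4-bit weights on the available GPU(s)
)

# Example prompt; replace with your own input.
inputs = tokenizer("An attention function can be described as", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```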


# Sources

https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite