---
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-V2-Lite
---

# ⚠️ Note:

These model weights are for personal testing purposes only. The goal is to find a quantization method that achieves high compression while preserving as much of the model's original performance as possible. The current compression scheme may not be optimal, so please use these weights with caution.

# Creation

This model was created by applying 4-bit quantization with `bitsandbytes` and `transformers`, as shown in the code snippet below.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "deepseek-ai/DeepSeek-V2-Lite"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 data type for the 4-bit weights
    bnb_4bit_use_double_quant=True,         # double quantization for extra memory savings
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb supports bfloat16 compute
)

bnb_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config,
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

bnb_model.push_to_hub("basicv8vc/DeepSeek-V2-Lite-bnb-4bit")
tokenizer.push_to_hub("basicv8vc/DeepSeek-V2-Lite-bnb-4bit")
```
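
# Usage

A minimal loading sketch for the pushed 4-bit weights; this assumes a CUDA-capable GPU, that `accelerate` is installed for `device_map="auto"`, and that running the model's remote code is acceptable. The prompt is purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "basicv8vc/DeepSeek-V2-Lite-bnb-4bit"

# The bitsandbytes quantization config is stored with the checkpoint,
# so the weights load directly in 4-bit.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)

# Illustrative prompt only.
inputs = tokenizer("Briefly explain 4-bit quantization.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```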

# Sources

https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite