⚠️ Note:

These model weights are for personal testing purposes only. The goal is to find a quantization method that achieves high compression while preserving as much of the model's original performance as possible. The current compression scheme may not be optimal, so please use these weights with caution.

Creation

This model was created by quantizing deepseek-ai/DeepSeek-V2-Lite-Chat to 4-bit with bitsandbytes and transformers, as shown in the code snippet below.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "deepseek-ai/DeepSeek-V2-Lite-Chat"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",  # NF4 quantization for the weights
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16  # compute in bfloat16 (supported by bitsandbytes)
)

bnb_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

bnb_model.push_to_hub("basicv8vc/DeepSeek-V2-Lite-Chat-bnb-4bit")
tokenizer.push_to_hub("basicv8vc/DeepSeek-V2-Lite-Chat-bnb-4bit")
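
Usage

The published repository already contains the 4-bit quantization config, so the weights can be loaded directly (bitsandbytes must be installed). The following is a minimal sketch, assuming the model's chat template is exposed via apply_chat_template; the prompt is illustrative only.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "basicv8vc/DeepSeek-V2-Lite-Chat-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # place the 4-bit weights on the available GPU(s)
)

# Illustrative chat-formatted prompt
messages = [{"role": "user", "content": "Explain 4-bit quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))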

Sources

https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat
