# huihui-ai/DeepSeek-V3-bf16
This model was converted from deepseek-ai/DeepSeek-V3 to BF16.
Here we provide the conversion commands and related information for ollama.
If needed, we can upload the BF16 version.
## FP8 to BF16
- Download the deepseek-ai/DeepSeek-V3 model; it requires approximately 641 GB of disk space.
```shell
cd /home/admin/models
huggingface-cli download deepseek-ai/DeepSeek-V3 --local-dir ./deepseek-ai/DeepSeek-V3
```
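Before starting the download, it can help to confirm that the target filesystem actually has enough free space. A minimal check, using the same path as above:

```shell
# Check free space on the filesystem that will hold the weights;
# the download alone needs roughly 641 GB.
df -h /home/admin/models
```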
- Create the conda environment and install the dependencies (the `requirements.txt` from the DeepSeek-V3 `inference` directory).

```shell
conda create -yn DeepSeek-V3 python=3.12
conda activate DeepSeek-V3
pip install -r requirements.txt
```
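As a quick sanity check that the environment is usable, you can verify that the main dependencies of the conversion script import cleanly. This is a minimal sketch, assuming the script relies on torch, triton, and safetensors:

```shell
# Sanity-check the environment before starting the long conversion run.
python -c "import torch, triton, safetensors; print(torch.__version__)"
```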
- Convert to BF16; this requires approximately an additional 1.3 TB of disk space.
```shell
cd deepseek-ai/DeepSeek-V3/inference
python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3-bf16
```
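Once the script finishes, a quick size check of the output directory should show roughly 1.3 TB of BF16 weights:

```shell
# The BF16 output should total roughly 1.3 TB.
du -sh /home/admin/models/deepseek-ai/DeepSeek-V3-bf16
```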
## BF16 to GGUF
- Use the conversion script from llama.cpp (download the latest version; a setup sketch follows the command below) to convert DeepSeek-V3-bf16 to GGUF format. This requires approximately an additional 1.3 TB of disk space.
```shell
python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-V3-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf --outtype f16
```
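If llama.cpp is not set up yet, a minimal setup might look like the following. It clones the repository, installs the Python dependencies for `convert_hf_to_gguf.py`, and builds the `llama-quantize` and `llama-cli` binaries used in the next steps (build options are illustrative):

```shell
# Get the latest llama.cpp and the Python deps for convert_hf_to_gguf.py.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Build the CLI tools; binaries end up in build/bin/ (llama-quantize, llama-cli, ...).
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j
```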
- Use the llama.cpp quantization tool to quantize the model (`llama-quantize` is built in the setup sketch above; other quantization types are also available, see the example after this command). We convert to Q2_K first; this requires approximately an additional 227 GB of disk space.
```shell
llama-quantize /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf Q2_K
```
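Other quantization types follow the same pattern. For example, a Q4_K_M quantization (larger than Q2_K but generally higher quality) would look like this, assuming the same paths:

```shell
# Same invocation, different quantization type; output size grows accordingly.
llama-quantize /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q4_K_M.gguf Q4_K_M
```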
- Use `llama-cli` to test the model (`llama-cli` is built alongside `llama-quantize` in the setup sketch above).
```shell
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf -n 2048
```
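For a quick non-interactive check you can also pass a prompt directly; a minimal sketch (the prompt text is arbitrary):

```shell
# One-shot generation: feed a prompt and cap the number of generated tokens.
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf -p "Hello, introduce yourself." -n 128
```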
## Use with ollama
Note: this model requires Ollama 0.5.5.

You can use huihui_ai/deepseek-v3:671b-q2_K directly:

```shell
ollama run huihui_ai/deepseek-v3:671b-q2_K
```
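Alternatively, if you produced your own Q2_K GGUF with the steps above, you can register it with a local Ollama instance through a Modelfile. A minimal sketch (the model name `deepseek-v3-q2k` is just an example):

```shell
# Point a Modelfile at the locally quantized gguf and create a local model.
echo 'FROM /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf' > Modelfile
ollama create deepseek-v3-q2k -f Modelfile
ollama run deepseek-v3-q2k
```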