---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-R1-0528
tags:
- deepseek
- transformers
---

# huihui-ai/DeepSeek-R1-0528-GGUF

This model was converted from DeepSeek-R1-0528 to BF16. Here we simply provide the conversion commands and related information about ollama.

## FP8 to BF16

1. Download the [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) model; it requires approximately 641 GB of space.

```
cd /home/admin/models
huggingface-cli download deepseek-ai/DeepSeek-R1-0528 --local-dir ./deepseek-ai/DeepSeek-R1-0528
```

2. Create the environment.

```
conda create -yn DeepSeek-V3 python=3.12
conda activate DeepSeek-V3
pip install -r requirements.txt
```

3. Convert to BF16; this requires approximately 1.3 TB of additional space. You need to download the conversion code from the "inference" folder of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3).

```
cd deepseek-ai/DeepSeek-V3/inference
python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1-0528/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16
```

## BF16 to f16.gguf

1. Use the [llama.cpp](https://github.com/ggml-org/llama.cpp) conversion script to convert DeepSeek-R1-0528-bf16 to GGUF format; this requires approximately 1.3 TB of additional space.

```
python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-f16.gguf --outtype f16
```

2. Use the [llama.cpp](https://github.com/ggml-org/llama.cpp) quantization tool to quantize the model (llama-quantize needs to be compiled first); see the other [quant options](https://github.com/ggml-org/llama.cpp/blob/master/tools/quantize/quantize.cpp). Convert to Q2_K first; this requires approximately 227 GB of additional space.

```
llama-quantize /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-Q2_K.gguf Q2_K
```

3. Use llama-cli to test.

```
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-Q2_K.gguf -n 2048
```

## Use with ollama

**Note:** this model requires [Ollama 0.9](https://github.com/ollama/ollama/releases/tag/v0.9.0)

You can use [huihui_ai/deepseek-r1:671b-0528-Q2_K](https://ollama.com/huihui_ai/deepseek-r1:671b-0528-Q2_K) directly:

```
ollama run huihui_ai/deepseek-r1:671b-0528-Q2_K
```
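Alternatively, if you want to serve the Q2_K quantization you produced locally in the steps above rather than pulling the prebuilt model, a minimal sketch: write a Modelfile whose `FROM` points at the GGUF file and register it with `ollama create`. The model name `deepseek-r1-0528-q2k` is an arbitrary choice for illustration, and the GGUF path assumes the directory layout used above.

```
# Point a Modelfile at the locally quantized GGUF (path from the steps above).
cat > Modelfile <<'EOF'
FROM /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-Q2_K.gguf
EOF

# Register the local GGUF under a name of your choosing, then run it.
ollama create deepseek-r1-0528-q2k -f Modelfile
ollama run deepseek-r1-0528-q2k
```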