Davidqian123 committed ee7aff1 (verified) · Parent(s): edbdc2b

Update README.md

Files changed (1): README.md (+3 -4)

README.md (updated):

tags:
- llama
- llama-3
- meta
- GGUF
---

# DeepSeek-R1-Distill-Llama-8B-NexaQuant

## Background + Overview

DeepSeek-R1 has been making headlines for rivaling OpenAI’s o1 reasoning model while remaining fully open-source. Many users want to run it locally to ensure data privacy, reduce latency, and maintain offline access. However, fitting such a large model onto personal devices typically requires quantization (e.g., Q4_K_M), which often sacrifices accuracy (up to ~22% accuracy loss) and undermines the benefits of a local reasoning model.

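For context, the conventional path is llama.cpp's `llama-quantize` tool, as in the minimal sketch below (the GGUF file names are placeholders, not artifacts shipped in this repo):

```bash
# Convert a full-precision GGUF to Q4_K_M, the common 4-bit preset mentioned above.
# This is the standard quantization path whose accuracy loss NexaQuant is designed to avoid.
./llama-quantize DeepSeek-R1-Distill-Llama-8B-F16.gguf \
                 DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
                 Q4_K_M
```
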
We’ve solved this trade-off by quantizing the DeepSeek-R1 distilled model to one-fourth of its original size without losing accuracy, letting you run powerful on-device reasoning wherever you are, with no compromises. Tests on an **HP OmniBook AI PC** with an **AMD Ryzen™ AI 9 HX 370 processor** showed a decoding speed of **66.40 tokens per second** and peak RAM usage of just **1228 MB** for the NexaQuant version, versus **25.28 tokens per second** and **3788 MB** for the unquantized version, while **maintaining full-precision model accuracy**. In other words, NexaQuant decodes roughly 2.6× faster while using about one-third of the memory.

## How to run locally

NexaQuant is compatible with **Nexa-SDK**, **Ollama**, **LM Studio**, **Llama.cpp**, and any other llama.cpp-based project. Below, we outline multiple ways to run the model locally.

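Before picking an option, here is what a bare llama.cpp run of the model might look like (a minimal sketch; the GGUF file name is a placeholder, so substitute the actual file from this repo):

```bash
# Load the NexaQuant GGUF and run a short reasoning prompt with llama.cpp.
# -m: model path (placeholder name), -p: prompt, -n: max tokens to generate.
./llama-cli -m DeepSeek-R1-Distill-Llama-8B-NexaQuant.gguf \
            -p "A train travels 120 km in 1.5 hours. What is its average speed?" \
            -n 256
```
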
  #### Option 1: Using Nexa SDK