Davidqian123 committed (verified)
Commit edbdc2b · 1 parent: 3b7af59

Update README.md

Files changed (1): README.md (+6 −4)
README.md CHANGED
@@ -19,7 +19,9 @@ We’ve solved the trade-off by quantizing the DeepSeek R1 Distilled model to on
 
 ---
 
-## How to Use on Your Device
+## How to run locally
+NexaQuant is compatible with **Nexa-SDK**, **Ollama**, **LM Studio**, **Llama.cpp**, and any llama.cpp based project.
+
 Below, we outline multiple ways to run the model locally.
 
 #### Option 1: Using Nexa SDK
@@ -58,7 +60,7 @@ Get the latest version from the [official website](https://lmstudio.ai/).
 
 **Step 2: Load and Run the Model**
 
-2. In LM Studio's top panel, search for and select `NexaAIDev/DeepSeek-R1-Distill-Llama-8B-NexaQuant`.
-3. Click `Download` (if not already downloaded) and wait for the model to load.
-4. Once loaded, go to the chat window and start a conversation.
+1. In LM Studio's top panel, search for and select `NexaAIDev/DeepSeek-R1-Distill-Llama-8B-NexaQuant`.
+2. Click `Download` (if not already downloaded) and wait for the model to load.
+3. Once loaded, go to the chat window and start a conversation.
 ---
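
The updated section states that NexaQuant works with Llama.cpp and any llama.cpp-based project. As a minimal sketch of that path (the GGUF filename below is an assumption; this commit does not name the file), the model could be run with llama.cpp's `llama-cli`:

```shell
# Assumed filename: download the NexaQuant GGUF from the
# NexaAIDev/DeepSeek-R1-Distill-Llama-8B-NexaQuant repo first.
llama-cli \
  -m DeepSeek-R1-Distill-Llama-8B-NexaQuant.gguf \
  -p "Why is the sky blue?" \
  -n 256
```

Here `-m` points at the local GGUF file, `-p` supplies the prompt, and `-n` caps the number of tokens generated.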