Update README.md
README.md (CHANGED)
@@ -19,7 +19,9 @@ We’ve solved the trade-off by quantizing the DeepSeek R1 Distilled model to on
 
 ---
 
-## How to
+## How to run locally
+NexaQuant is compatible with **Nexa-SDK**, **Ollama**, **LM Studio**, **Llama.cpp**, and any llama.cpp based project.
+
 Below, we outline multiple ways to run the model locally.
 
 #### Option 1: Using Nexa SDK
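
The hunk above states that NexaQuant works with any llama.cpp based project. As a minimal illustration of what that means in practice (this sketch is not part of the commit), the following loads a GGUF build of the model with llama-cpp-python; the local file name and parameters are assumptions, so substitute the quantized file you actually downloaded:

```python
# Illustrative sketch, not from this commit: run the NexaQuant GGUF through
# llama-cpp-python, one of the llama.cpp based projects named above.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-8B-NexaQuant.gguf",  # assumed file name; point at your downloaded .gguf
    n_ctx=4096,  # context window size; adjust to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 7 * 8? Think step by step."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Any other llama.cpp front end (Ollama, LM Studio, the llama-cli binary) can load the same GGUF file.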
@@ -58,7 +60,7 @@ Get the latest version from the [official website](https://lmstudio.ai/).
 
 **Step 2: Load and Run the Model**
 
-
-
-
+1. In LM Studio's top panel, search for and select `NexaAIDev/DeepSeek-R1-Distill-Llama-8B-NexaQuant`.
+2. Click `Download` (if not already downloaded) and wait for the model to load.
+3. Once loaded, go to the chat window and start a conversation.
 ---
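
Beyond the chat window in the steps above, LM Studio can also serve a loaded model over its OpenAI-compatible local server (default `http://localhost:1234/v1`). A minimal sketch, assuming the server is enabled and the model identifier matches what LM Studio reports; both are assumptions to verify in your LM Studio version:

```python
# Illustrative sketch, not from this commit: query the model through
# LM Studio's OpenAI-compatible local server (default port 1234).
from openai import OpenAI

# LM Studio ignores the API key, but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="NexaAIDev/DeepSeek-R1-Distill-Llama-8B-NexaQuant",  # assumed id; check LM Studio's server tab
    messages=[{"role": "user", "content": "Summarize what NexaQuant is in two sentences."}],
)
print(resp.choices[0].message.content)
```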