Update README.md
README.md CHANGED
@@ -108,7 +108,8 @@ print (f"model answer: {answer_content}")
 
 ### Quantized versions for compact devices
 A series of quantized versions for [AM-Thinking-v1](https://huggingface.co/a-m-team/AM-Thinking-v1-gguf) model.
-For use with [llama.cpp](https://github.com/ggml-org/llama.cpp) and [Ollama](https://github.com/ollama/ollama)
+For use with [llama.cpp](https://github.com/ggml-org/llama.cpp) and [Ollama](https://github.com/ollama/ollama)
+is available at [AM-Thinking-v1-gguf](https://huggingface.co/a-m-team/AM-Thinking-v1-gguf).
 
 
 ## 🔧 Post-training pipeline
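The added lines point readers to the GGUF quantizations for local runtimes. As a rough illustration of what "for use with llama.cpp" can look like in practice, here is a minimal sketch using the llama-cpp-python bindings; the quant filename pattern and generation settings are assumptions, so check the files actually listed in the AM-Thinking-v1-gguf repo before running.

```python
# Minimal sketch: loading a GGUF quant of AM-Thinking-v1 with llama-cpp-python.
# The filename pattern "*q4_k_m.gguf" is an assumption; pick a quant that is
# actually listed at https://huggingface.co/a-m-team/AM-Thinking-v1-gguf.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="a-m-team/AM-Thinking-v1-gguf",
    filename="*q4_k_m.gguf",  # assumed quant level
    n_ctx=4096,               # context window; adjust to the device's memory
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what GGUF quantization is in one sentence."}],
)
print(response["choices"][0]["message"]["content"])
```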