Update README.md
Chinda Opensource Thai LLM 4B is iApp Technology's cutting-edge Thai language model.
- **🌐 Demo:** [https://chindax.iapp.co.th](https://chindax.iapp.co.th) (Choose ChindaLLM 4b)
- **📦 Model Download:** [https://huggingface.co/iapp/chinda-qwen3-4b](https://huggingface.co/iapp/chinda-qwen3-4b)
- **🐋 Ollama:** [https://ollama.com/iapp/chinda-qwen3-4b](https://ollama.com/iapp/chinda-qwen3-4b)
- **🏠 Homepage:** [https://iapp.co.th/products/chinda-opensource-llm](https://iapp.co.th/products/chinda-opensource-llm)
- **📄 License:** Apache 2.0
```bash
pip install sglang>=0.4.6.post1

python -m sglang.launch_server --model-path iapp/chinda-qwen3-4b --reasoning-parser qwen3
```
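Once the SGLang server is running, it normally exposes an OpenAI-compatible chat-completions endpoint (port 30000 by default; check the launch logs). The URL and helper names below are our own sketch, not part of this repository:

```python
import json

# Assumed default address of SGLang's OpenAI-compatible server;
# adjust host/port if you passed different flags to launch_server.
SGLANG_URL = "http://localhost:30000/v1/chat/completions"

def build_chat_payload(prompt, model="iapp/chinda-qwen3-4b"):
    """JSON body for a single-turn chat-completions request."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

def extract_answer(raw):
    """Pull the assistant message text out of a chat-completions reply."""
    return json.loads(raw)["choices"][0]["message"]["content"]

# Shape of a (heavily trimmed) chat-completions response:
sample = '{"choices": [{"message": {"role": "assistant", "content": "สวัสดีครับ!"}}]}'
print(extract_answer(sample))
```

POST the payload to `SGLANG_URL` with any HTTP client to get a real reply.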
#### Using Ollama (Easy Local Setup)

**Installation:**

```bash
# Install Ollama (if not already installed)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the Chinda LLM 4B model
ollama pull iapp/chinda-qwen3-4b
```

**Basic Usage:**

```bash
# Start an interactive chat with Chinda LLM
ollama run iapp/chinda-qwen3-4b

# Example conversation ("Explain artificial intelligence to me")
ollama run iapp/chinda-qwen3-4b "อธิบายเกี่ยวกับปัญญาประดิษฐ์ให้ฟังหน่อย"
```

**API Server:**

```bash
# Start the Ollama API server
ollama serve

# Query it with curl ("สวัสดีครับ" = "Hello")
curl http://localhost:11434/api/generate -d '{
  "model": "iapp/chinda-qwen3-4b",
  "prompt": "สวัสดีครับ",
  "stream": false
}'
```

**Model Specifications:**

- **Size:** 2.5GB (quantized)
- **Context Window:** 40K tokens
- **Architecture:** Optimized for local deployment
- **Performance:** Fast inference on consumer hardware
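The same `/api/generate` call can be scripted. With `"stream": false`, Ollama returns a single JSON object whose generated text lives in the `response` field; the helper names below are our own illustration, not part of this repository:

```python
import json

def build_generate_payload(prompt, model="iapp/chinda-qwen3-4b"):
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def extract_response(raw):
    """Pull the generated text out of a non-streaming /api/generate reply."""
    reply = json.loads(raw)
    if not reply.get("done", False):
        raise ValueError("expected a completed (non-streaming) reply")
    return reply["response"]

# Shape of a (trimmed) non-streaming reply from http://localhost:11434/api/generate:
sample = '{"model": "iapp/chinda-qwen3-4b", "response": "สวัสดีครับ!", "done": true}'
print(extract_response(sample))
```

Send the payload to `http://localhost:11434/api/generate` while `ollama serve` is running to get a live reply.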
237 |
## 🔧 Advanced Configuration
|
238 |
|
239 |
### Processing Long Texts
|