Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo scb10x/typhoon-translate-4b-gguf --hf-file typhoon-translate-4b-q4_k_m.gguf -p "
+llama-cli --hf-repo scb10x/typhoon-translate-4b-gguf --hf-file typhoon-translate-4b-q4_k_m.gguf -p "Translate the following text into Thai.\n\nWhat is machine learning?"
 ```
 
 ### Server:
@@ -43,7 +43,7 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo scb10x/typhoon-translate-4b-gguf --hf-file typhoon-translate-4b-q4_k_m.gguf -p "
+./llama-cli --hf-repo scb10x/typhoon-translate-4b-gguf --hf-file typhoon-translate-4b-q4_k_m.gguf -p "Translate the following text into Thai.\n\nWhat is machine learning?"
 ```
 or
 ```
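The hunks above only touch the CLI commands; the `### Server:` block referenced in the first hunk's context falls outside this diff. For reference, a minimal sketch of serving the same GGUF with llama.cpp's `llama-server` is shown below; the exact command and flags are assumed from upstream llama.cpp, not taken from this commit.

```bash
# Assumed server invocation; not part of this commit's diff.
llama-server --hf-repo scb10x/typhoon-translate-4b-gguf --hf-file typhoon-translate-4b-q4_k_m.gguf -c 2048
```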