Update README.md
README.md CHANGED

@@ -27,7 +27,7 @@ language:
 
 - LlamaEdge version: coming soon
 
-<!-- - LlamaEdge version: [v0.14.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.0) and above
+<!-- - LlamaEdge version: [v0.14.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.0) and above -->
 
 - Prompt template
 
@@ -47,6 +47,8 @@ language:
   <|assistant|>
   ```
 
+  The `{user_message_n}` has the format: `{image_base64_encoding_string}\n{user_question}`.
+
 - Context size: `128000`
 
 - Run as LlamaEdge service
 
@@ -59,7 +61,7 @@ language:
     --ctx-size 128000 \
     --llava-mmproj mmproj-model-f16.gguf \
     --model-name minicpmv-26
-
+  ```
 
 ## Quantized GGUF Models
 
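The README change documents that each `{user_message_n}` is the base64-encoded image followed by a newline and the question. A minimal sketch of assembling such a message (the helper name and sample payload below are illustrative, not part of the README):

```python
import base64

def build_user_message(image_bytes: bytes, question: str) -> str:
    """Assemble a user message in the documented format:
    {image_base64_encoding_string}\n{user_question}."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"{image_b64}\n{question}"

# Tiny placeholder payload; a real call would pass the raw bytes of an image file.
msg = build_user_message(b"\x89PNG", "What is in this picture?")
print(msg)
```

In a real request, `image_bytes` would be the contents of the image file (e.g. read with `open(path, "rb").read()`) and the resulting string would be sent as the user turn in the prompt template above.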