hoanganhpham committed
Commit 0593101 · verified · 1 parent: 4bc3cb2

Update README.md

Files changed (1): README.md (+39 −12)
README.md CHANGED
@@ -12,15 +12,42 @@ Static quants of https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706
 
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q2_K.gguf) | Q2_K | 3.4 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q6_K.gguf) | Q6_K | 6.8 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.Q8_0.gguf) | Q8_0 | 8.8 | |
-| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B.f16.gguf) | f16 | 16.5 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q2_K.gguf) | Q2_K | 3.4 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q3_K_M.gguf) | Q3_K_M | 4.2 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q4_K_S.gguf) | Q4_K_S | 4.9 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q4_K_M.gguf) | Q4_K_M | 5.1 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q6_K.gguf) | Q6_K | 6.8 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.Q8_0.gguf) | Q8_0 | 8.8 | |
+| [GGUF](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706-GGUF/resolve/main/II-Medical-8B-1706.f16.gguf) | F16 | 16.5 | |
+
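As an aside (not part of the commit above): the Size/GB column is the on-disk file size, which roughly bounds the memory needed for the weights alone. A minimal illustrative sketch of picking the largest quant that fits a memory budget, with sizes copied from the table:

```python
# Illustration only: quant file sizes in GB, copied from the README table above.
QUANT_SIZES_GB = {
    "Q2_K": 3.4, "Q3_K_S": 3.9, "Q3_K_M": 4.2, "Q3_K_L": 4.5,
    "Q4_K_S": 4.9, "Q4_K_M": 5.1, "Q5_K_S": 5.8, "Q5_K_M": 6.0,
    "Q6_K": 6.8, "Q8_0": 8.8, "F16": 16.5,
}

def largest_fitting_quant(budget_gb: float):
    """Return the quant type with the largest file that fits budget_gb, or None."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(largest_fitting_quant(8.0))  # Q6_K: 6.8 GB fits, Q8_0 at 8.8 GB does not
```

Note this ignores the KV cache and runtime overhead, so real memory use at inference time will be somewhat higher than the file size.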
+
+## Running II-Medical-8B-1706
+```bash
+apt-get update
+apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
+git clone https://github.com/ggml-org/llama.cpp
+cmake llama.cpp -B llama.cpp/build \
+  -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
+cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
+cp llama.cpp/build/bin/llama-* llama.cpp
+```
+
+```bash
+./llama.cpp/llama-cli \
+  --model path/to/II-Medical-8B-1706.Q8_0.gguf \
+  --threads 32 \
+  --ctx-size 16384 \
+  --n-gpu-layers 99 \
+  -ot ".ffn_.*_exps.=CPU" \
+  --seed 3407 \
+  --prio 3 \
+  --temp 0.6 \
+  --min-p 0.01 \
+  --top-p 0.9 \
+  -no-cnv \
+  --prompt "<|im_start|>user\nI'm feeling unwell. Please help.<|im_end|>\n<|im_start|>assistant\n"
+```
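The `--prompt` string in the added run command follows the ChatML turn format, ending with an open assistant turn so the model generates the reply. A small sketch assembling the same string in Python (the `chatml_prompt` helper is hypothetical, for illustration only, not part of llama.cpp):

```python
def chatml_prompt(user_message: str) -> str:
    """Build a single-turn ChatML prompt, leaving the assistant turn open
    so generation continues from there (mirrors the --prompt string above)."""
    return (
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("I'm feeling unwell. Please help."))
```

Passing the prompt this way only applies in one-shot mode (`-no-cnv`); without that flag, llama-cli's interactive mode applies the model's own chat template instead.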