## About

Static quants of https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706
## Provided Quants

(sorted by size, not necessarily quality)
| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | Q2_K | 3.4 | |
| GGUF | Q3_K_S | 3.9 | |
| GGUF | Q3_K_M | 4.2 | |
| GGUF | Q3_K_L | 4.5 | |
| GGUF | Q4_K_S | 4.9 | |
| GGUF | Q4_K_M | 5.1 | |
| GGUF | Q5_K_S | 5.8 | |
| GGUF | Q5_K_M | 6.0 | |
| GGUF | Q6_K | 6.8 | |
| GGUF | Q8_0 | 8.8 | |
| GGUF | F16 | 16.5 | |
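
To fetch a single quant before running it, the Hugging Face CLI works; a minimal sketch, assuming the Q8_0 file name used in the run command below:

```bash
# Download one quant file from this repo (adjust the file name to the
# quant you want; Q8_0 matches the run command further down).
pip install -U "huggingface_hub[cli]"
huggingface-cli download Intelligent-Internet/II-Medical-8B-1706-GGUF \
    II-Medical-8B-1706.Q8_0.gguf --local-dir .
```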
## Running II-Medical-8B-1706
```bash
# Install build prerequisites, then build llama.cpp with CUDA enabled.
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```
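
If no CUDA toolkit is available, the same build works CPU-only; a minimal sketch (leaving `GGML_CUDA` off, which is cmake's default):

```bash
# CPU-only variant of the same build: omit -DGGML_CUDA=ON.
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --target llama-cli llama-gguf-split
```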
```bash
# Single-shot generation: -no-cnv disables interactive chat mode, and the
# prompt is written in the model's ChatML format.
./llama.cpp/llama-cli \
    --model path/to/II-Medical-8B-1706.Q8_0.gguf \
    --threads 32 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --seed 3407 \
    --prio 3 \
    --temp 0.6 \
    --min-p 0.01 \
    --top-p 0.9 \
    -no-cnv \
    --prompt "<|im_start|>user\nI'm feeling unwell. Please help. <|im_end|>\n<|im_start|>assistant\n"
```
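
For longer sessions, the same build can also serve an OpenAI-compatible HTTP API via llama-server; a minimal sketch, assuming you build that extra target on top of the steps above:

```bash
# Build the server target too (it was not in the target list above).
cmake --build llama.cpp/build --config Release -j --target llama-server
cp llama.cpp/build/bin/llama-server llama.cpp

# Serve the same quant; the chat template is applied server-side.
./llama.cpp/llama-server \
    --model path/to/II-Medical-8B-1706.Q8_0.gguf \
    --n-gpu-layers 99 \
    --ctx-size 16384 \
    --port 8080 &

# Query the OpenAI-compatible endpoint.
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "I am feeling unwell. Please help."}]}'
```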