# llm-compiler-13b-IMat-GGUF
Llama.cpp imatrix quantization of facebook/llm-compiler-13b
- Original Model: facebook/llm-compiler-13b
- Original dtype: BF16 (bfloat16)
- Quantized by: llama.cpp b3256
- IMatrix dataset: here

## Files

### IMatrix

Status: ✅ Available
Link: here
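
For context, here is a rough sketch of how an imatrix-based quant of this kind is typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. The file and calibration-data names below are illustrative only, not the exact commands used for this repo:

```bash
# 1. Compute an importance matrix from a calibration dataset (illustrative paths)
./llama-imatrix -m llm-compiler-13b.BF16.gguf -f calibration.txt -o llm-compiler-13b.imatrix

# 2. Quantize with the importance matrix applied
./llama-quantize --imatrix llm-compiler-13b.imatrix \
    llm-compiler-13b.BF16.gguf llm-compiler-13b.IQ2_M.gguf IQ2_M
```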
### Common Quants

Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
---|---|---|---|---|---|
llm-compiler-13b.Q8_0.gguf | Q8_0 | 13.83GB | ✅ Available | ⚪ Static | 📦 No |
llm-compiler-13b.Q6_K.gguf | Q6_K | 10.68GB | ✅ Available | ⚪ Static | 📦 No |
llm-compiler-13b.Q4_K.gguf | Q4_K | 7.87GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.Q3_K.gguf | Q3_K | 6.34GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.Q2_K.gguf | Q2_K | 4.85GB | ✅ Available | 🟢 IMatrix | 📦 No |
### All Quants

Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
---|---|---|---|---|---|
llm-compiler-13b.BF16.gguf | BF16 | 26.03GB | ✅ Available | ⚪ Static | 📦 No |
llm-compiler-13b.FP16.gguf | F16 | 26.03GB | ✅ Available | ⚪ Static | 📦 No |
llm-compiler-13b.Q8_0.gguf | Q8_0 | 13.83GB | ✅ Available | ⚪ Static | 📦 No |
llm-compiler-13b.Q6_K.gguf | Q6_K | 10.68GB | ✅ Available | ⚪ Static | 📦 No |
llm-compiler-13b.Q5_K.gguf | Q5_K | 9.23GB | ✅ Available | ⚪ Static | 📦 No |
llm-compiler-13b.Q5_K_S.gguf | Q5_K_S | 8.97GB | ✅ Available | ⚪ Static | 📦 No |
llm-compiler-13b.Q4_K.gguf | Q4_K | 7.87GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.Q4_K_S.gguf | Q4_K_S | 7.42GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ4_NL.gguf | IQ4_NL | 7.37GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ4_XS.gguf | IQ4_XS | 6.96GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.Q3_K.gguf | Q3_K | 6.34GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.Q3_K_L.gguf | Q3_K_L | 6.93GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.Q3_K_S.gguf | Q3_K_S | 5.66GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ3_M.gguf | IQ3_M | 5.98GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ3_S.gguf | IQ3_S | 5.66GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ3_XS.gguf | IQ3_XS | 5.36GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ3_XXS.gguf | IQ3_XXS | 4.96GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.Q2_K.gguf | Q2_K | 4.85GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.Q2_K_S.gguf | Q2_K_S | 4.44GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ2_M.gguf | IQ2_M | 4.52GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ2_S.gguf | IQ2_S | 4.20GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ2_XS.gguf | IQ2_XS | 3.89GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ2_XXS.gguf | IQ2_XXS | 3.54GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ1_M.gguf | IQ1_M | 3.14GB | ✅ Available | 🟢 IMatrix | 📦 No |
llm-compiler-13b.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |
## Downloading using huggingface-cli

If you do not have huggingface-cli installed:

```bash
pip install -U "huggingface_hub[cli]"
```

Download the specific file you want:

```bash
huggingface-cli download legraphista/llm-compiler-13b-IMat-GGUF --include "llm-compiler-13b.Q8_0.gguf" --local-dir ./
```

If the model file is big, it has been split into multiple files. To download them all to a local folder, run:

```bash
huggingface-cli download legraphista/llm-compiler-13b-IMat-GGUF --include "llm-compiler-13b.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
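
If you would rather fetch every quant at once, the same CLI works without an `--include` filter; note that this downloads tens of gigabytes:

```bash
# Download the entire repository contents into the current directory
huggingface-cli download legraphista/llm-compiler-13b-IMat-GGUF --local-dir ./
```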
## Inference

```bash
llama.cpp/main -m llm-compiler-13b.Q8_0.gguf --color -i -p "prompt here"
```
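
A slightly fuller one-shot (non-interactive) invocation as a sketch; the quant choice, context size, and token count are illustrative, and the binary name varies across llama.cpp builds (newer releases ship `llama-cli` instead of `main`):

```bash
# One-shot generation with an IMatrix quant (illustrative settings):
# -c sets the context window, -n caps the number of generated tokens.
# Replace "prompt here" with your input, e.g. LLVM IR or assembly for this model.
llama.cpp/main -m llm-compiler-13b.Q4_K.gguf -c 4096 -n 512 -p "prompt here"
```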
## FAQ

### Why is the IMatrix not applied everywhere?

According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per HellaSwag results).
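
If you want to sanity-check this on your own hardware, llama.cpp ships a perplexity tool that can also compute a HellaSwag score. A hedged sketch; the binary name depends on your build (older builds call it `perplexity`), and the evaluation data files are assumptions you must supply yourself:

```bash
# Measure perplexity for two quants on the same data file and compare
./llama-perplexity -m llm-compiler-13b.Q2_K.gguf -f wiki.test.raw
./llama-perplexity -m llm-compiler-13b.Q6_K.gguf -f wiki.test.raw

# Or compute a HellaSwag score (needs a HellaSwag validation file)
./llama-perplexity -m llm-compiler-13b.Q2_K.gguf --hellaswag -f hellaswag_val.txt
```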
### How do I merge a split GGUF?

1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases and download the latest release for your system
2. Locate your GGUF chunks folder (ex: `llm-compiler-13b.Q8_0`)
3. Run `gguf-split --merge llm-compiler-13b.Q8_0/llm-compiler-13b.Q8_0-00001-of-XXXXX.gguf llm-compiler-13b.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split

Got a suggestion? Ping me @legraphista!