Darkhn / L3.3-70B-Animus-V1-GGUF
Tags: GGUF · llama.cpp · imatrix
License: mit
1 contributor · History: 19 commits
Latest commit: b59eb2e (verified, 3 days ago) by Darkhn: Add Q5_K_M GGUF quant: L3.3-70B-Animus-V1-Q5_K_M.gguf
File                            Size       LFS  Last commit message                                     Updated
.gitattributes                  2.05 kB    -    Add Q5_K_S GGUF quant: L3.3-70B-Animus-V1-Q5_K_S.gguf   4 days ago
L3.3-70B-Animus-V1-Q3_K_M.gguf  34.3 GB    LFS  Add Q3_K_M GGUF quant: L3.3-70B-Animus-V1-Q3_K_M.gguf   4 days ago
L3.3-70B-Animus-V1-Q3_K_S.gguf  30.9 GB    LFS  Add Q3_K_S GGUF quant: L3.3-70B-Animus-V1-Q3_K_S.gguf   4 days ago
L3.3-70B-Animus-V1-Q4_0.gguf    40 GB      LFS  Add Q4_0 GGUF quant: L3.3-70B-Animus-V1-Q4_0.gguf       4 days ago
L3.3-70B-Animus-V1-Q4_K_M.gguf  42.5 GB    LFS  Add Q4_K_M GGUF quant: L3.3-70B-Animus-V1-Q4_K_M.gguf   4 days ago
L3.3-70B-Animus-V1-Q4_K_S.gguf  40.3 GB    LFS  Add Q4_K_S GGUF quant: L3.3-70B-Animus-V1-Q4_K_S.gguf   4 days ago
L3.3-70B-Animus-V1-Q5_K_M.gguf  49.9 GB    LFS  Add Q5_K_M GGUF quant: L3.3-70B-Animus-V1-Q5_K_M.gguf   3 days ago
L3.3-70B-Animus-V1-Q5_K_S.gguf  48.7 GB    LFS  Add Q5_K_S GGUF quant: L3.3-70B-Animus-V1-Q5_K_S.gguf   4 days ago
README.md                       369 Bytes  -    Upload README.md with huggingface_hub                   4 days ago
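The quant filenames above follow one pattern, so their direct download URLs can be derived mechanically. A minimal sketch, assuming the standard Hugging Face `resolve` URL scheme and the repo id `Darkhn/L3.3-70B-Animus-V1-GGUF` taken from the page breadcrumb (no network access, pure string construction):

```python
# Sketch: build direct download URLs for the GGUF quants listed above.
# Assumption: Hugging Face's "resolve" URL scheme
#   https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
# and the repo id from this page's breadcrumb.
REPO_ID = "Darkhn/L3.3-70B-Animus-V1-GGUF"
QUANTS = ["Q3_K_S", "Q3_K_M", "Q4_0", "Q4_K_S", "Q4_K_M", "Q5_K_S", "Q5_K_M"]

def quant_url(quant: str, revision: str = "main") -> str:
    """Return the download URL for one quantized GGUF file."""
    filename = f"L3.3-70B-Animus-V1-{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

for q in QUANTS:
    print(quant_url(q))
```

In practice the `huggingface_hub` library's `hf_hub_download` (or the `huggingface-cli download` command) handles the same resolution plus caching and LFS redirects; the sketch only shows where the files live.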