YokaiKoibito/llama2_70b_chat_uncensored-GGUF
Tags: GGUF · ehartford/wizard_vicuna_70k_unfiltered · uncensored · wizard · vicuna · llama
License: llama2
2 contributors · History: 11 commits
Latest commit 58266ce by YokaiKoibito, over 1 year ago: "Add q6_K, q8_0, and f16 as split files due to 50GB limit"
| File | Size | LFS | Last commit message | Committed |
|---|---|---|---|---|
| .gitattributes | 1.61 kB | | Quantized files | over 1 year ago |
| README.md | 3.46 kB | | Update README.md | over 1 year ago |
| llama2_70b_chat_uncensored-Q2_K.gguf | 29.3 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-Q3_K_L.gguf | 36.1 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-Q3_K_M.gguf | 33.2 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-Q3_K_S.gguf | 29.9 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-Q4_K_M.gguf | 41.4 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-Q4_K_S.gguf | 39.1 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-Q5_K_M.gguf | 48.8 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-Q5_K_S.gguf | 47.5 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-Q6_K.gguf-split-a | 49.4 GB | LFS | Add q6_K, q8_0, and f16 as split files due to 50GB limit | over 1 year ago |
| llama2_70b_chat_uncensored-Q6_K.gguf-split-b | 7.2 GB | LFS | Add q6_K, q8_0, and f16 as split files due to 50GB limit | over 1 year ago |
| llama2_70b_chat_uncensored-Q8_0.gguf-split-a | 49.4 GB | LFS | Add q6_K, q8_0, and f16 as split files due to 50GB limit | over 1 year ago |
| llama2_70b_chat_uncensored-Q8_0.gguf-split-b | 23.9 GB | LFS | Add q6_K, q8_0, and f16 as split files due to 50GB limit | over 1 year ago |
| llama2_70b_chat_uncensored-f16.gguf-split-a | 49.4 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-f16.gguf-split-b | 49.4 GB | LFS | Quantized files | over 1 year ago |
| llama2_70b_chat_uncensored-f16.gguf-split-c | 39.2 GB | LFS | Quantized files | over 1 year ago |