ZeroWw committed
Commit 25444db · verified · 1 Parent(s): 05f657c

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. DeepSeek-V2-Lite-Chat.q8_p.gguf +3 -0
  3. README.md +1 -1
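
The commit message above refers to huggingface_hub's folder upload. Below is a minimal sketch of how such an upload is typically issued; the local folder path and repository id are placeholders assumed for illustration, not taken from this page.

# Sketch of an upload like the one in this commit (assumed local path and repo id).
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` / HF_TOKEN
api.upload_folder(
    folder_path="./DeepSeek-V2-Lite-Chat-GGUF",   # hypothetical local folder holding the .gguf files
    repo_id="ZeroWw/DeepSeek-V2-Lite-Chat-GGUF",  # hypothetical repo id, not confirmed by this page
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
# Large binaries such as *.gguf are stored through Git LFS, which is why the
# .gitattributes and LFS pointer changes appear in the diffs below.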
.gitattributes CHANGED
@@ -37,3 +37,4 @@ DeepSeek-V2-Lite-Chat.f16.gguf filter=lfs diff=lfs merge=lfs -text
 DeepSeek-V2-Lite-Chat.q5_k.gguf filter=lfs diff=lfs merge=lfs -text
 DeepSeek-V2-Lite-Chat.q6_k.gguf filter=lfs diff=lfs merge=lfs -text
 DeepSeek-V2-Lite-Chat.q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+DeepSeek-V2-Lite-Chat.q8_p.gguf filter=lfs diff=lfs merge=lfs -text
DeepSeek-V2-Lite-Chat.q8_p.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:458d8dbb5c64109623751f3c7e691f285770a6521bf06bf86172980b995b3bde
+size 16702517568
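
The three added lines are a Git LFS pointer: the repository itself stores only the spec version, the sha256 oid, and the byte size, while the roughly 16.7 GB blob lives in LFS storage. A small sketch of checking a downloaded copy against this pointer follows; the local path is an assumption, while the oid and size are the values from the pointer above.

# Sketch: verify a local copy against the LFS pointer's oid and size.
import hashlib, os

path = "DeepSeek-V2-Lite-Chat.q8_p.gguf"  # assumed local filename
expected_oid = "458d8dbb5c64109623751f3c7e691f285770a6521bf06bf86172980b995b3bde"
expected_size = 16702517568

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("OK: file matches the LFS pointer")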
README.md CHANGED
@@ -13,4 +13,4 @@ Result:
 both f16.q6 and f16.q5 are smaller than q8_0 standard quantization
 and they perform as well as the pure f16.
 
-Updated on: Fri Jul 12, 12:43:21
+Updated on: Fri Jul 12, 13:14:04