ibalampanis committed
Commit 557325d · verified · 1 Parent(s): 663a5f6

Update README.md

Files changed (1):
  1. README.md +7 -6
README.md CHANGED
@@ -25,12 +25,13 @@ This repository contains GGUF format model files for [ilsp's Meltemi 7B Instruct
 <!-- README_GGUF.md-provided-files start -->
 
 ## Provided files
-
-| Name | Quantization Method | Precision (Bits) | File Size | Max RAM Required | Use Case |
-| ---- | ------------------- | ---------------- | --------- | ---------------- | -------- |
-| [meltemi-7b-instruct-v1_q8_0.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7b-instruct-v1_q8_0.gguf) | Q8_0 | 8 | 7.95 GB | 7.30 GB | Low quality loss - recommended |
-| [meltemi-7b-instruct-v1_f16.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7b-instruct-v1_f16.gguf) | F16 | 16 | 15.00 GB | 14.20 GB | Very large, extremely low quality loss - recommended |
-| [meltemi-7b-instruct-v1_f32.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7b-instruct-v1_f32.gguf) | F32 | 32 | 27.90 GB | 29.30 GB | Very very large, extremely low quality loss - not recommended |
+| Name | Quantization Method | Precision (Bits) | File Size | Max RAM Required | Use Case |
+| ---- | ------------------- | ---------------- | --------- | ---------------- | -------- |
+| [meltemi-7b-instruct-v1_q4_k_m.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7b-instruct-v1_q4_k_m.gguf) | Q4_K_M | 4 | 4.54 GB | 4.41 GB | Medium, balanced quality |
+| [meltemi-7b-instruct-v1_q6_k.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7b-instruct-v1_q6_k.gguf) | Q6_K | 6 | 6.14 GB | 5.92 GB | Medium, low quality loss |
+| [meltemi-7b-instruct-v1_q8_0.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7b-instruct-v1_q8_0.gguf) | Q8_0 | 8 | 7.95 GB | 7.30 GB | Large, low quality loss |
+| [meltemi-7b-instruct-v1_f16.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7b-instruct-v1_f16.gguf) | F16 | 16 | 15.00 GB | 14.20 GB | Very large, extremely low quality loss |
+| [meltemi-7b-instruct-v1_f32.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7b-instruct-v1_f32.gguf) | F32 | 32 | 27.90 GB | 29.30 GB | Very very large, extremely low quality loss |
 
 **Note**: The above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 
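To use one of the files from the updated table, a minimal sketch of downloading it with the `huggingface_hub` Python client follows (an assumption of this note, not part of the README diff; the repo id and filename come from the table links):

```python
# Minimal sketch: fetch one quantization from the table above.
# Assumes `pip install huggingface_hub`; swap the filename for any other row.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="SPAHE/Meltemi-7B-Instruct-v1-GGUF",
    filename="meltemi-7b-instruct-v1_q4_k_m.gguf",
)
print(local_path)  # path of the cached GGUF file
```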
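The RAM note refers to GPU offloading as implemented by llama.cpp-family runtimes. A sketch with the `llama-cpp-python` bindings (one possible runtime, assumed here rather than prescribed by the README), where `n_gpu_layers` controls how many layers move from system RAM to VRAM:

```python
# Sketch of the RAM/VRAM trade-off from the note, using llama-cpp-python
# (an assumed runtime; any GGUF-capable loader behaves similarly).
from llama_cpp import Llama

llm = Llama(
    model_path="meltemi-7b-instruct-v1_q4_k_m.gguf",  # local path to a file from the table
    n_gpu_layers=-1,  # -1 offloads all layers to VRAM; 0 keeps everything in system RAM
    n_ctx=4096,       # context window; lower it if memory is tight
)
out = llm("Γράψε μία πρόταση για το Αιγαίο.", max_tokens=48)
print(out["choices"][0]["text"])
```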