calcuis committed on
Commit 47e7214 · verified · 1 Parent(s): 3932a7d

Update README.md

Files changed (1): README.md +30 -19
README.md CHANGED
@@ -29,37 +29,48 @@ ggc c2
 - opt a `vae`, a `clip(encoder)` and a `model` file in the current directory to interact with (see example below)
 
 >
->GGUF file(s) available. Select which one for ve:
+>GGUF file(s) available. Select which one for **ve**:
 >
->1. t3_cfg-q2_k.gguf
->2. t3_cfg-q4_k_m.gguf
->3. t3_cfg-q6_k.gguf
->4. ve_fp32-f16.gguf
->5. ve_fp32-f32.gguf
+>1. s3gen-bf16.gguf
+>2. s3gen-f16.gguf
+>3. s3gen-f32.gguf
+>4. t3_cfg-q2_k.gguf
+>5. t3_cfg-q4_k_m.gguf
+>6. t3_cfg-q6_k.gguf
+>7. ve_fp32-f16.gguf (recommended)
+>8. ve_fp32-f32.gguf
 >
->Enter your choice (1 to 5): 4
+>Enter your choice (1 to 8): 7
 >
 >ve file: ve_fp32-f16.gguf is selected!
 >
->GGUF file(s) available. Select which one for t3:
+>GGUF file(s) available. Select which one for **t3**:
 >
->1. t3_cfg-q2_k.gguf
->2. t3_cfg-q4_k_m.gguf
->3. t3_cfg-q6_k.gguf
->4. ve_fp32-f16.gguf
->5. ve_fp32-f32.gguf
+>1. s3gen-bf16.gguf
+>2. s3gen-f16.gguf
+>3. s3gen-f32.gguf
+>4. t3_cfg-q2_k.gguf
+>5. t3_cfg-q4_k_m.gguf (recommended)
+>6. t3_cfg-q6_k.gguf
+>7. ve_fp32-f16.gguf
+>8. ve_fp32-f32.gguf
 >
->Enter your choice (1 to 5): 2
+>Enter your choice (1 to 8): 5
 >
 >t3 file: t3_cfg-q4_k_m.gguf is selected!
 >
->Safetensors file(s) available. Select which one for s3gen:
+>GGUF file(s) available. Select which one for **s3gen**:
 >
->1. s3gen_bf16.safetensors (recommended)
->2. s3gen_fp16.safetensors (for non-cuda user)
->3. s3gen_fp32.safetensors
+>1. s3gen-bf16.gguf (recommended)
+>2. s3gen-f16.gguf (for non-cuda user)
+>3. s3gen-f32.gguf
+>4. t3_cfg-q2_k.gguf
+>5. t3_cfg-q4_k_m.gguf
+>6. t3_cfg-q6_k.gguf
+>7. ve_fp32-f16.gguf
+>8. ve_fp32-f32.gguf
 >
->Enter your choice (1 to 3): _
+>Enter your choice (1 to 8): _
 >
 
 - note: for the latest update, only tokenizer will be pulled to cache automatically during the first launch; you need to prepare the **model**, **encoder** and **vae** files yourself, working like [vision](https://huggingface.co/calcuis/llava-gguf) connector right away; mix and match, more flexible
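The interactive selection flow shown in the transcript above can be sketched as a minimal file picker. This is a hypothetical illustration only, not the actual `ggc` implementation; the `pick_gguf` helper name is an assumption, and only the prompt wording is taken from the transcript:

```python
import glob

def pick_gguf(role: str, pattern: str = "*.gguf") -> str:
    """List GGUF files in the current directory and prompt for one.

    Hypothetical sketch of the picker behavior shown in the
    transcript above; not the actual gguf-connector code.
    """
    files = sorted(glob.glob(pattern))
    if not files:
        raise FileNotFoundError(f"no {pattern} file found for {role}")
    print(f"GGUF file(s) available. Select which one for {role}:")
    for i, name in enumerate(files, start=1):
        print(f"{i}. {name}")
    choice = int(input(f"Enter your choice (1 to {len(files)}): "))
    selected = files[choice - 1]
    print(f"{role} file: {selected} is selected!")
    return selected

# The tool would run this once per component, e.g.:
# ve = pick_gguf("ve"); t3 = pick_gguf("t3"); s3gen = pick_gguf("s3gen")
```

Because every `.gguf` file in the directory is offered for every role, any quantization can be mixed and matched, which is the flexibility the note above refers to.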